Google Cloud Professional Certifications #217
From my side, I am studying through that course.
Professional Cloud Developer
A Professional Cloud Developer builds scalable and highly available applications using Google-recommended tools and best practices. This individual has experience with cloud-native applications, developer tools, managed services, and next-generation databases. A Professional Cloud Developer also has proficiency with at least one general-purpose programming language and instruments their code to produce metrics, logs, and traces.
GCP Computing Services
FaaS (functions as a service), PaaS (platform as a service), CaaS (containers as a service), IaaS (infrastructure as a service) - Choosing compute options
GCP DevOps: Code as Infrastructure
Infrastructure as code (IaC) is the process of managing and provisioning computer data centers through machine-readable definition files, often following the popular GitOps methodology. The key GitOps concept is using a Git repository to store the desired state of your environment. Terraform is a HashiCorp open source tool that enables you to predictably create, change, and improve your cloud infrastructure by using code. You can use Cloud Build, a Google Cloud continuous integration service, to automatically apply Terraform manifests to your environment. The Operations Suite is used for profiling and debugging in production (see "Cloud Operations Suite in a minute").
GCP Design Patterns: Microservices
A microservices architecture enables you to break down a large application into smaller independent services, with each service having its own realm of responsibility.
Messaging Middlewares
Cloud Pub/Sub
GCP Storage and Databases
GCP Development Setup
GCP dev tools:
Fundamentals of Application Development
Application Development Methodologies
Application Development Best Practices
Best practices for developing cloud applications ensure secure, scalable, resilient applications with loosely coupled services that can be monitored and can fail gracefully on error (see Patterns for scalable and resilient apps). Some key best-practice concepts include:
Managing your application's code and environment
Code Repository: Store your application's code in a code repository, such as Git or Subversion.
Dependency Management: Don't store external dependencies such as JAR files or external packages in your code repository. Instead, depending on your application platform, explicitly declare your dependencies with their versions and install them using a dependency manager. For example, for a Node.js application, you can declare your application dependencies in a package.json file and later install them using the npm install command.
Separate your application's configuration settings from your code: Don't store configuration settings as constants in your source code. Instead, specify configuration settings as environment variables. This enables you to easily modify settings between development, test, and production environments.
Implement automated testing.
Implement build and release systems: Build and release systems enable continuous integration and delivery. While it's crucial that you have repeatable deployments, it's also important that you have the ability to roll back to a previous version of the app in a few minutes if you catch a bug in production. Cloud Build is GCP's CI/CD service that builds pipelines, constructs deployment artifacts, and has built-in testing and security. It's important to consider security throughout the continuous integration and delivery process. With a SecDevOps approach, you can automate security checks, such as confirming that you're using the most secure versions of third-party software and dependencies, scanning code for security vulnerabilities, confirming that resources have permissions based on the principle of least privilege, and detecting errors in production and rolling back to the last stable build. Instead of using the default Cloud Build service account, you can specify your own service account to execute builds on your behalf. You can specify any number of service accounts per project. Maintaining multiple service accounts enables you to grant different permissions to these service accounts depending on the tasks they perform. For example, you can use one service account for building and pushing images to Container Registry and a different service account for building and pushing images to Artifact Registry.
Implement microservices-based architectures: Microservices enable you to structure your application components in relation to your business boundaries. In this example, the UI, payment, shipping, and order services are all broken up into individual microservices.
Use event-driven processing where possible: Remote operations can have unpredictable response times and can make your application seem slow. Keep the operations in the user thread to a minimum and perform backend operations asynchronously, using event-driven processing where possible.
Design for loose coupling: Design application components so that they are loosely coupled at runtime. Tightly coupled components can make an application less resilient to failures, spikes in traffic, and changes to services. An intermediate component such as a message queue can be used to implement loose coupling, perform asynchronous processing, and buffer requests in case of spikes in traffic. You can use a Cloud Pub/Sub topic as a message queue: publishers publish messages to the topic, and subscribers subscribe to messages from this topic (a small sketch follows below). In the context of HTTP API payloads, consumers of HTTP APIs should bind loosely with the publishers of the API.
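As a minimal sketch of this message-queue pattern with the Pub/Sub Python client: the project, topic, and subscription names below are placeholders, and google-cloud-pubsub is assumed to be installed and authenticated.

```python
# Minimal Pub/Sub sketch: a producer publishes an event, a consumer pulls it.
# Assumes `pip install google-cloud-pubsub` and that the (hypothetical) topic
# and subscription below already exist in project "my-project".
from google.cloud import pubsub_v1

project_id = "my-project"          # placeholder
topic_id = "orders"                # placeholder
subscription_id = "orders-email"   # placeholder

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(project_id, topic_id)

# Publish a message; data must be bytes, attributes are optional metadata.
future = publisher.publish(topic_path, b"order created", order_id="1234")
print("Published message:", future.result())

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(project_id, subscription_id)

def callback(message):
    # The consumer only reads the fields it cares about (loose coupling).
    print("Received:", message.data, message.attributes)
    message.ack()

streaming_pull = subscriber.subscribe(subscription_path, callback=callback)
try:
    streaming_pull.result(timeout=10)  # listen briefly for the demo, then stop
except Exception:
    streaming_pull.cancel()
```

Because the producer and consumer only share the topic, either side can be scaled, replaced, or paused without the other changing.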
In the course's example, the email service retrieves information about each customer from the customer service. The customer service returns the customer's name, age, and email address in its payload. To send an email, the email service should only reference the name and email fields in the payload; it should not attempt to bind with all the fields in the payload. This method of loosely binding fields enables the publisher to evolve the API and add fields to the payload in a backwards-compatible manner.
Implement application components so that they don't store state internally or access a shared state.
Design each application so that it focuses on compute tasks only: This enables you to use a worker pattern to add or remove additional instances of the component for scalability. Application components should start up quickly to enable efficient scaling, and shut down gracefully when they receive a termination signal. For example, if your application needs to process streaming data from IoT devices, you can use a Cloud Pub/Sub topic to receive the data. You can then implement Cloud Functions that are triggered whenever a new piece of data comes in. Cloud Functions can process, transform, and store the data.
Cache application data: Caching content can improve application performance and lower network latency. Cache application data that is frequently accessed or that is computationally intensive to calculate each time. When a user requests data, the application component should check the cache first. If the data exists in the cache, meaning its TTL has not expired, the application should return the previously cached data. If the data does not exist in the cache, or has expired, the application should retrieve the data from backend data sources and recompute results as needed.
Implement API gateways to make backend functionality available to consumer applications: You can use Cloud Endpoints to develop, deploy, protect, and monitor APIs based on the OpenAPI specification or gRPC. The API for your application can run on backends such as App Engine, GKE, or Compute Engine. If you have legacy applications that cannot be refactored and moved to the cloud, consider implementing APIs as a facade or adapter layer.
Use federated identity management for user management: Delegate user authentication to external identity providers such as Google, Facebook, Twitter, or GitHub.
Monitor the status of your application and services: It's important to monitor the status of your application and services to ensure that they're always available and performing optimally. The monitoring data can be used to automatically alert operations teams as soon as the system begins to fail. Operations teams can then diagnose and address the issue promptly. Unless otherwise noted, Google Cloud health checks are implemented by dedicated software tasks that connect to backends according to parameters specified in a health check resource. Each connection attempt is called a probe. Google Cloud records the success or failure of each probe.
Treat your logs as event streams: Logs constitute a continuous stream of events that keep occurring as long as the application is running. Don't manage log files in your application. Instead, write to an event stream such as standard out and let the underlying infrastructure collate all events for later analysis and storage. With this approach, you can set up logs-based metrics and trace requests across different services in your application.
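To make the "logs as event streams" guidance concrete, here is a minimal sketch that writes structured JSON log lines to standard output, which the platform's logging agent (for example on Cloud Run or GKE) can collect into Cloud Logging; the field names besides severity and message are purely illustrative, not a required schema.

```python
# Emit structured log entries to stdout; severity and extra fields become
# queryable in Cloud Logging once the platform agent ingests the JSON lines.
import json
import sys
from datetime import datetime, timezone

def log_event(severity: str, message: str, **fields):
    entry = {
        "severity": severity,
        "message": message,
        "time": datetime.now(timezone.utc).isoformat(),
        **fields,  # arbitrary structured fields, e.g. order_id, latency_ms
    }
    print(json.dumps(entry), file=sys.stdout, flush=True)

log_event("INFO", "order processed", order_id="1234", latency_ms=42)
log_event("ERROR", "payment service unreachable", order_id="1234")
```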
With Google Operations you can debug your application, set up error monitoring, set up logging and logs-based metrics, trace requests across services, and monitor applications running in a multi-cloud environment.
Implement retry logic with exponential back-off and fail gracefully if the errors persist: When accessing services and resources in a distributed system, applications need to be resilient to temporary and long-lasting errors. Resources can sometimes become unavailable due to transient network errors. In this case, applications should implement retry logic with exponential back-off and fail gracefully if the errors persist.
Identify failure scenarios and create disaster recovery plans: Identify the people, processes, and tools for disaster recovery. Initially, you can perform tabletop tests. These are tests in which teams discuss how they would respond in failure scenarios but don't perform any real actions. This type of test encourages teams to discuss what they would do in unexpected situations. Then, simulate failures in your test environment. After you understand the behaviour of your application under failure scenarios, address any problems and refine your disaster recovery plan. Then test the failure scenarios in your production environment.
Consider data sovereignty and compliance requirements: Some regions and industry segments have strict compliance requirements for data protection and consumer privacy.
Consider using the strangler pattern when re-architecting and migrating large applications: In the early phases of migration, you might replace smaller components of the legacy application with newer application components or services. You can incrementally replace more features of the original application with new services. A strangler facade can receive requests intended for the legacy application and route them either to the legacy components or to the new services.
Application Design Pattern: The twelve-factor app is an approach that helps programmers write modern apps in a declarative way, using clear contracts, deployed via the cloud.
Cloud Guru: Application Design Summary
Managed platforms, design and migration patterns: the different managed platforms available to us, their scaling velocity, and some of their trade-offs.
Cloud-native software development: agile principles for cloud-native software development, which always uses source control management systems.
Message buses to decouple microservices: the concept of message buses, Cloud Pub/Sub in particular, and how services like this can help us decouple our microservices into producers and consumers of data using topics and subscriptions.
Deployment methodologies: automated deployment methodologies with tools like Cloud Build, helping us to safely promote changes through our different environments and into production.
High-level best practices for securing the source of your compute and container images: creating an auditable build pipeline for your changes, so you can be confident that what you put into production is really supposed to be there.
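To make the retry-with-exponential-back-off recommendation above concrete, here is a minimal standard-library sketch; the transient-error type, attempt count, and delay limits are illustrative assumptions rather than a prescribed policy.

```python
# Retry a flaky operation with exponential backoff plus jitter, then fail gracefully.
import random
import time

class TransientError(Exception):
    """Placeholder for errors worth retrying (timeouts, HTTP 429/503, etc.)."""

def call_with_backoff(operation, max_attempts=5, base_delay=1.0, max_delay=32.0):
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except TransientError as exc:           # only retry errors known to be transient
            if attempt == max_attempts:
                raise                           # give up: let the caller fail gracefully
            delay = min(max_delay, base_delay * 2 ** (attempt - 1))
            delay += random.uniform(0, 1)       # jitter avoids synchronized retries
            print(f"Attempt {attempt} failed ({exc}); retrying in {delay:.1f}s")
            time.sleep(delay)
```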
GCP storage and database optionsNon-structured dataCloud StorageCloud storage is the logical home for any unstructured data. such as binary blobs, videos and images, or other proprietary files. Cloud Storage featuresCloud Storage provides object storage buckets where files are stored as objects inside these buckets. Standard class is the choice for most cases with one of the other options for backups or other long-term storage. These storage classes will apply to a bucket and become the default storage class for objects inside that bucket. However, you can change the storage class of an individual object, using object lifecycle management. Cloud storage also provides us with some useful data retention features. Another great feature of cloud storage is signed URLs. Rather than make a storage object's URL public, you can configure a signed URL, which contains authentication information within the URL itself, allowing whoever has this URL the specific permission to read this object for a specific period of time only. Signed URLs can be generated programmatically or with the gsutil command line tool. Cloud Storage bucket can be configured to host a static website for a domain you own. Static web pages can contain client-side technologies such as HTML, CSS, and JavaScript. They cannot contain dynamic content such as server-side scripts like PHP. Because Cloud Storage doesn't support custom domains with HTTPS on its own, this tutorial uses Cloud Storage with HTTP(S) Load Balancing to serve content from a custom domain over HTTPS. For more ways to serve content from a custom domain over HTTPS, see troubleshooting for HTTPS serving. You can also use Cloud Storage to serve custom domain content over HTTP, which doesn't require a load balancer. To ensure that Cloud Storage auto-scaling always provides the best performance, you should ramp up your request rate gradually for any bucket that hasn't had a high request rate in several days or that has a new range of object keys. If your request rate is less than 1000 write requests per second or 5000 read requests per second, then no ramp-up is needed. If your request rate is expected to go over these thresholds, you should start with a request rate below or near the thresholds and then double the request rate no faster than every 20 minutes. If you run into any issues such as increased latency or error rates, pause your ramp-up or reduce the request rate temporarily in order to give Cloud Storage more time to scale your bucket. You should use exponential backoff to retry your requests when: Receiving errors with 408 and 429 response codes. Lab
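Tying back to the signed URLs feature described above, here is a minimal sketch using the Python client library rather than gsutil; the bucket and object names are placeholders, and google-cloud-storage plus service-account credentials that are allowed to sign are assumed.

```python
# Generate a V4 signed URL granting read access to one object for 15 minutes.
# Assumes `pip install google-cloud-storage` and signing-capable credentials
# (e.g. GOOGLE_APPLICATION_CREDENTIALS pointing at a service account key).
from datetime import timedelta
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-example-bucket")      # placeholder bucket
blob = bucket.blob("reports/summary.pdf")        # placeholder object

url = blob.generate_signed_url(
    version="v4",
    expiration=timedelta(minutes=15),
    method="GET",
)
print("Share this time-limited link:", url)
```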
Analytic Data
Cloud Bigtable is Google's petabyte-scale, wide-column NoSQL database designed for high throughput and scalability.
Data BigQuery is Google's other petabyte-scale data platform. BigQuery is designed to be your big data analytics warehouse, storing incredible amounts of data but in a familiar relational way. This enables data analysts to query these enormous datasets with simple SQL statements. BigQuery can retain historical data for very little cost, the same as cloud storage itself, so you can use it for analytics that form the foundations of business intelligence systems or train machine learning models on its datasets. There are also multiple public datasets available, covering everything from baby names and taxi journeys to medical information and weather data. Relational Structured DataCloud Spanner is Google's global SQL based relational database. It's a proprietary product that provides horizontal scalability and high availability and strong consistency. It's not cheap to run, but if your business needs all three of these things, such as in financial sector, Spanner could be the answer. Cloud SQL provides managed instances of MySQL, Postgres and Microsoft SQL server and removes the requirement for you to provision and configure your own machines. Choosing a primary keyOften your application already has a field that's a natural fit for use as the primary key. For example, for a Customers table, there might be an application-supplied CustomerId that serves well as the primary key. In other cases, you may need to generate a primary key when inserting the row. This would typically be a unique integer value with no business significance (a surrogate primary key). In all cases, you should be careful not to create hotspots with the choice of your primary key. For example, if you insert records with a monotonically increasing integer as the key, you'll always insert at the end of your key space. This is undesirable because Spanner divides data among servers by key ranges, which means your inserts will be directed at a single server, creating a hotspot. There are techniques that can spread the load across multiple servers and avoid hotspots: NoSQL DataCloud Firestore is Google's fully-managed NoSQL document database, designed for large collections of small JSON documents. Cloud Firestore offers some amazing features like strong consistency and mobile SDKs that support offline data. Note: Connecting to managed databasesConnecting to managed database is the same as with a non-managed version via connection strings. In addition, LAB In this lab, we create a MySQL database and then securely connect to it with a service account using [Cloud SQL Auth proxy](https://cloud.google.com/sql/docs/mysql/sql-proxy). We also upload some pre-generated data and run some simple queries. [Github repo](https://github.com/linuxacademy/content-google-certified-pro-cloud-developer) Create a MySQL 2nd Generation Cloud SQL InstanceOur first task is to create our MySQL 2nd generation Cloud SQL instance:
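Before the Cloud SQL lab steps below, here is a minimal sketch of the hotspot-avoidance idea just described for Spanner primary keys: generate a version 4 UUID, or hash-prefix a sequential value, instead of inserting monotonically increasing keys. Pure standard-library Python; the key layout is an assumption for illustration.

```python
# Two simple ways to avoid monotonically increasing Spanner primary keys.
import hashlib
import uuid

# Option 1: a random (version 4) UUID as the primary key.
customer_id = str(uuid.uuid4())

# Option 2: keep a sequential value but prefix it with a hash-derived shard,
# so writes spread across key ranges instead of hitting one server.
def sharded_key(sequence_number: int, shards: int = 16) -> str:
    shard = int(hashlib.sha256(str(sequence_number).encode()).hexdigest(), 16) % shards
    return f"{shard:02d}-{sequence_number:012d}"   # shard prefix depends on the hash

print(customer_id)
print(sharded_key(123456))
```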
Note: It will take up to 10 minutes to create the database instance. You can complete the next objective while you wait.
Create the VM
Next, we need to create a virtual machine:
Create the Service Account Used to Connect with Cloud SQL Securely
With our VM created, we now need to create the service account we'll use to connect with Cloud SQL securely:
Upload the Key to the VM
Now we'll upload the key to our VM and configure the
Configure the MySQL Client and Cloud SQL Proxy
With our VM set, we can configure the MySQL client and Cloud SQL Proxy:
Enable an API for the Cloud SQL Proxy
For this step, we must first enable an API for the Cloud SQL Proxy and grab the connection name for our DB:
Create a Secure Connection to the Database
Now we can run the Cloud SQL Auth proxy to create a secure connection to the database using the service account we made.
**Note** To use the Cloud SQL Auth proxy, you must meet the following requirements:
- The Cloud SQL Admin API must be enabled.
- You must provide the Cloud SQL Auth proxy with Google Cloud authentication credentials.
- You must provide the Cloud SQL Auth proxy with a valid database user account and password.
- The instance must either have a public IPv4 address, or be configured to use private IP.
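Once the proxy is running locally, the application connects to it like a normal local MySQL server. A minimal sketch, assuming the proxy is listening on 127.0.0.1:3306 and using placeholder credentials and the PyMySQL package (neither is mandated by the lab):

```python
# Connect through the Cloud SQL Auth proxy as if MySQL were running locally.
# Assumes the proxy was started with something like:
#   ./cloud_sql_proxy -instances=PROJECT:REGION:INSTANCE=tcp:3306
import pymysql  # pip install pymysql

connection = pymysql.connect(
    host="127.0.0.1",          # the proxy, not the instance's public IP
    port=3306,
    user="app_user",           # placeholder database user
    password="change-me",      # placeholder password
    database="inventory",      # placeholder database
)
try:
    with connection.cursor() as cursor:
        cursor.execute("SELECT NOW()")
        print(cursor.fetchone())
finally:
    connection.close()
```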
Lab: In this lab, we create a Python Flask web application that adds, stores, and tracks the names and birth years of famous computer scientists. We store that data in a NoSQL document store so that we can interact with the database records as JSON documents. To make sure that the database is consistent and globally available, we create a Cloud Firestore database and connect it to our app, which runs on the Cloud Run serverless platform. [GitHub repo](https://github.com/linuxacademy/content-google-certified-pro-cloud-developer)
Activate Cloud Shell and the Required APIs
First, we need to set up our Cloud Shell:
Create a Firestore Collection and Documents
With our Cloud Shell set, we can move on to working with Firestore:
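A minimal sketch of what creating a collection and document looks like with the Python Firestore client, matching the lab's computer-scientists theme; the collection and field names are assumptions, and google-cloud-firestore plus default credentials are assumed to be set up.

```python
# Create (or overwrite) a document in a Firestore collection, then read it back.
# Assumes `pip install google-cloud-firestore` and application default credentials.
from google.cloud import firestore

db = firestore.Client()

people = db.collection("people")                      # placeholder collection
people.document("ada-lovelace").set(
    {"name": "Ada Lovelace", "birth_year": 1815}      # stored as a JSON-like document
)

for doc in people.stream():                           # iterate the collection
    print(doc.id, doc.to_dict())
```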
Deploy and Test the Flask Application
In the following instructions, substitute
Lab: In this lab, we will create a Cloud Bigtable instance, create and write data to our table, and then query that data with the HBase shell.
Create a Cloud Bigtable Instance
Our first step is to create a Bigtable instance:
Connect to Bigtable with HBase
git clone https://github.com/GoogleCloudPlatform/cloud-bigtable-examples.git
Create and Write Data to a Table with HBase
describe 'vehicles'
Query the Table's Data with HBase
Developing applications with GCE, or Google Compute Engine
When comparing GCP managed services, we talked about Compute Engine, the (half-managed) Kubernetes Engine, Cloud Run, and Cloud Functions... where does Compute Engine stand out? With Compute Engine we're deploying virtual machine instances. Each VM is a complete virtual server that requires an operating system and software.
Compute Engine
So why would you choose Compute Engine over other platforms? Compute Engine is, simply, managed virtual machines, usually running a version of Linux or sometimes Windows. Compute Engine also adds plenty of configuration and automation options to make your VMs first-class cloud-native citizens alongside all of those other services, such as custom disk images, firewall rules, network tags, the metadata server, and startup scripts to bootstrap an application.
Using Bootstrap Scripts in Google Compute Engine
In this lab, we automate the basic setup of a new server in Google Compute Engine by deploying a simple bootstrap script stored in Cloud Storage that references the Compute Engine metadata server.
Deploy Apache with a Startup Script
Deploy a Startup Script from a Cloud Storage Bucket
Bootstrap an application in GCE through a custom image as a boot diskYou can prepare an image in advance based on a standard boot disk that also includes your application and any necessary startup scripts to get it working. Another useful tool you can use with custom disk images is the GCE metadata server. We can call the metadata server from within your startup scripts to configure your application in different ways, based on metadata about your instance. For example, you could set environment-specific variables or make decisions based on an instance's location. Let's take a quick look at this in action. Private Google Access.VM instances that only have internal IP addresses (no external IP addresses) can use Private Google Access. They can reach the external IP addresses of Google APIs and services. The source IP address of the packet can be the primary internal IP address of the network interface or an address in an alias IP range that is assigned to the interface. If you disable Private Google Access, the VM instances can no longer reach Google APIs and services; they can only send traffic within the VPC network. Managing Google Compute Engine Images and Instance GroupsA regional managed instance group automates the creation of the instances for you and distributes them across multiple zones in a region for high availability. Managing groups of identical virtual machines can provide extra reliability and resilience in your infrastructure, as each individual machine comes as a disposable component—easily replaced from a template that you have previously defined. In this lab, we'll set up a "golden image" for our desired Compute Engine instance and use it to create an instance template. Then we'll deploy a group of managed instances based on this template that are distributed across a region for high availability.
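As mentioned above, startup scripts can call the metadata server to configure the application per environment. A minimal Python sketch of querying it from inside a VM follows; the endpoint and Metadata-Flavor header are the documented ones, while the custom "environment" attribute is a hypothetical example.

```python
# Query the Compute Engine metadata server from inside an instance.
# Only works on a GCE VM; the Metadata-Flavor header is required.
import urllib.request

def get_metadata(path: str) -> str:
    url = f"http://metadata.google.internal/computeMetadata/v1/{path}"
    req = urllib.request.Request(url, headers={"Metadata-Flavor": "Google"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read().decode()

zone = get_metadata("instance/zone")                     # e.g. projects/123/zones/us-east1-b
env = get_metadata("instance/attributes/environment")    # hypothetical custom metadata key
print("Running in", zone, "with environment", env)
```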
Create a Golden Image for a Web Server
Create an Instance Template
9. Click Create.
Create a Regional Managed Instance Group
Load BalancerThe final piece of the puzzle for GCE deployment is Google's load balancer service. Global Load Balancing with Google Compute EngineOnce you have successfully set up a group of managed instances, you have moved from the "pets" model to the "cattle" model, looking after a template rather than an individual machine so instances themselves can come and go as necessary. The final piece of the puzzle is how you direct users and traffic to your instances — and only to the healthy ones. In this lab, we will set up a managed instance group and then use a Google Cloud load balancer to manage incoming requests from the outside world. On the lab page, right-click Open Google Console and select the option to open it in a new private browser window (this option will read differently depending on your browser — e.g., in Chrome, it says "Open Link in Incognito Window"). Then, sign in to Google Cloud Platform using the credentials provided on the lab page. On the Welcome to your new account screen, review the text, and click Accept. In the "Welcome Cloud Student!" pop-up once you're signed in, check to agree to the terms of service, choose your country of residence, and click Agree and Continue. Set up a Firewall Rule
Set up an HTTP Health Check
Create the Instance Template and Managed Instance Group
Create an HTTP Load Balancer
Logging
The Cloud Logging agent can gather logs from many common applications or from custom log files you define.
Preemptive Shutdowns
Compute Engine sends a preemption notice to the instance in the form of an ACPI G2 Soft Off signal. You can use a shutdown script to handle the preemption notice and complete cleanup actions before the instance stops.
Troubleshooting
Example: You have created a managed instance group, but instances inside the group are constantly being deleted and recreated. What is a common issue that could cause this behaviour? If you have defined a health check (for example, an HTTP check) for your group, you need to ensure you have sufficient firewall configuration to allow Google's probes to connect to your instances. Otherwise, the health checks will fail, and the instances will be marked as unhealthy and recreated. Errors in the instance template will also cause instance creation inside the group to fail. These can include specifying source images that no longer exist, or attempting to attach the same persistent disk in read/write mode to multiple instances.
Developing Applications with GKETHE ILLUSTRATED CHILDREN’S GUIDE TO KUBERNETES thanks, Josh!Orchestrating containers in productionKubernetes ClusterAt a basic level, a Kubernetes cluster contains two primary types of computer or VM. First you have masters, which provide the components that make up the control plane. This is responsible for making decisions about your cluster, such as scheduling containers. Kubernetes Cluster MasterKubernetes masters run components, which provide the control plane for the cluster. First, there's the scheduler for scheduling workloads. When you want to deploy a container, the scheduler will pick a node to run that container on. The node it picks can be affected by all kinds of factors, such as the current load on each available node and the requirements of your container. Kubernetes Cluster NodeNodes are a lot more straightforward than the master. There's the kubelet, which is an agent for Kubernetes. It communicates with the control plane and takes instructions, Note The entire control plane and all of the components are fully managed for you by Google when you create a GKE cluster. Containers, Pods, Replica Sets, and DeploymentContainersContainers themselves contain all of our application code and libraries and are defined by a Docker file. PodsA Pod is a logical application-centric unit of deployment, and it's the smallest thing that you can deploy to Kubernetes. Replica SetsReplica Sets introduce some scaling and resilience using some other simple Kubernetes objects. Normally we run multiple copies of a Pod called replicas inside something called a replica set. A replica set contains a template that is used to create each replica Pod and a definition of how many replicas it should run. DeploymentA deployment is the most common way to deploy an application to Kubernetes as it give us lots of extra useful logic for getting our apps safely into production. For example, let's say we're running version 1 of our amazing app in this deployment. Our replica set has three replicas, each an identical Pod that can handle requests to our app. But now let's say we want to deploy version 2. How can we do that safely? Well, we simply update the configuration of our deployment and the deployment manager will do something really clever. ServiceA Kubernetes service object exposes the groups of Pods to the network. It does this by creating a single fixed IP address and routing incoming traffic to a group of Pods. We add labels to our replica set -- things like app equals NGINX, env equals prod -- and then we use a selector in our service to match those labels and route traffic. Now, if you recall how deployments allow us to safely update Pods, you'll see that services are only going to route traffic to healthy Pods. Note Note Workload IdentityThat’s why we introduced Workload Identity, a new way to help reduce the potential “blast radius” of a breach or compromise and management overhead, while helping you enforce the principle of least privilege across your environment. It does so by automating best practices for workload authentication, removing the need for workarounds and making it easy to follow recommended security best practices. By enforcing the principle of least privilege, your workloads only have the minimum permissions needed to perform their function. Because you don’t grant broad permissions (like when using the node service account), you reduce the scope of a potential compromise. 
Provisioning
PersistentVolume resources are used to manage durable storage in a cluster. In GKE, a PersistentVolume is typically backed by a persistent disk. You can also use other storage solutions like NFS; Filestore is an NFS solution on Google Cloud. A PersistentVolumeClaim is a request for and claim to a PersistentVolume resource. PersistentVolumeClaim objects request a specific size, access mode, and StorageClass for the PersistentVolume. If a PersistentVolume that satisfies the request exists or can be provisioned, the PersistentVolumeClaim is bound to that PersistentVolume. Pods use claims as volumes. The cluster inspects the claim to find the bound volume and mounts that volume for the Pod.
Resource sharing and quotas: A resource-sharing policy for applications used by different teams in a Google Kubernetes Engine cluster needs to ensure that all applications can access the resources they need to run, via the following: A resource quota, defined by a ResourceQuota object, provides constraints that limit aggregate resource consumption per namespace. It can limit the quantity of objects that can be created in a namespace by type, as well as the total amount of compute resources that may be consumed by resources in that namespace. A LimitRange is a policy to constrain the resource allocations (limits and requests) that you can specify for each applicable object kind (such as Pod or PersistentVolumeClaim) in a namespace.
Lab: Deploying WordPress and MySQL to GKE
In this lab, we will create a reasonably complex application stack on Google Kubernetes Engine, creating deployments for WordPress and MySQL, utilizing persistent disks. To complete this lab, you should have some basic experience and familiarity with Google Cloud Platform and Google Kubernetes Engine.
Create the GKE Cluster and Storage Class
Create Persistent Volumes
Deploy MySQL
Deploy WordPress
Managing DeploymentsWithout constraints, GKE will let a container use all of the available resources of a node. So it's important to understand the requirements of your container and plan accordingly. CPU, Memory, Requests, LimitsAmong other things, Kubernetes lets you define settings for the CPU and memory that a container can use. These definitions are done in two ways: using requests and limits. NamespacesWhen you deploy an object to Kubernetes without specifying a namespace, it runs in the default namespace, but you can create any number of additional custom namespaces as a logical separation for applications, environments, or even teams. RBAC and IAMTo control security and access for people who operate or deploy to GKE, Kubernetes uses role-based access control, or RBAC, which you can combine with GCP's identity and access management, or IAM. Workload IdentityTo lock down the permissions of workloads actually running on the cluster, which we can do with workload identity. Operating Kubernetes EngineHow to manage, maintain, and troubleshoot application deployments on GKE. Pod and Container LifecycleTo help you operate GKE effectively and know what's going on under the hood, it's important to understand the lifecycle of Pods and containers. Whenever a Pod is created, either on its own or part of a replica set or deployment, the Pod state goes through various different phases. First, the Pod is pending. This means its definition has been accepted by the cluster, but it's not yet ready to serve, usually because one or more of the containers inside the Pod is not yet ready. Common ErrorsTwo of the most common errors to look for in a failed container are error image pull and crash loop backoff. GKE Logging and MonitoringGKE is tightly integrated with Cloud Operations for logging and monitoring. Logs can be viewed through the GKE dashboard by drilling down through individual deployments, Pods, and services, or in dedicated operations dashboards. Custom and External MetricsCustom and external metrics can also be used to determine the behavior of the horizontal Pod autoscaler, which, as we know, can add new Pods to a deployment. Custom metrics are metrics reported to Cloud Monitoring by our own application -- for example, things like queries per second, latency from dependencies, or anything else you like. Resources |
Developing Serverless Applications with GCP
App Engine: PaaS (platform as a service)
App Engine Standard: Let's take a quick look at an App Engine standard app in the GCP console's App Engine section.
Cloud Functions: FaaS (functions as a service)
One of the primary differentiators for Cloud Functions is that it is event driven. There are many different events that can trigger a function. Generally we group them into two categories, for triggering two types of functions: HTTP functions and background functions.
Writing Cloud Functions
To write a cloud function we have to conform to one of the supported function runtimes, which are Node versions 8 and 10, Python 3, Go, Java, .NET, or Ruby. We can't use our own runtimes here, as these have been optimized by Google to run as efficiently as possible.
Use Cases
Create and deploy a cloud function in the GCP console: In the GCP console, in the Cloud Functions section, click Create function. I can pick any of the supported runtimes, and you can see that the editor prepares some boilerplate code for my function.
How to secure cloud functions
Like most GCP resources, functions run with their own identity, a dedicated custom service account that you create and manage. Using a custom service account allows you to specify granular permissions on who and what can access your function, and on the other things your function can access itself. For example, when you deploy an HTTP function, it is secured by default and will only accept authenticated requests. Background functions can only be invoked by the event source to which they are subscribed, such as a Pub/Sub or Cloud Storage event. If the function being called needs access to other GCP resources or APIs, it's a good idea to give it a custom identity to control the level of access that it has inside your project.
Best practices for using cloud functions: First of all, don't start background activities in your function.
Summary: Cloud Functions must be written in a supported language and can be triggered by either an HTTP request or a background event. They are perfect for event-driven workloads like processing Internet of Things data, acting as a webhook for another application, or just generally providing logic or glue between other things in your stack.
Lab: Create an HTTP Google Cloud Function
Google Cloud Functions are a fully managed and serverless way to run event-driven code. They react to demand from zero to planet-scale and come with integrated monitoring, logging, and debugging. All you need to do is plug in your code! In this lab, we will introduce ourselves to Cloud Functions by writing our first function that will simply respond to an HTTP trigger; in other words, our function will run when you send a request to its URL.
Enable APIs and Set Up the Cloud Shell
Write the Hello World Function
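A minimal sketch of the kind of function written in this step, assuming the Python 3 runtime; the entry-point name is whatever you specify when deploying, and the request argument is the Flask Request object supplied by the runtime.

```python
# main.py - an HTTP-triggered Cloud Function (Python runtime).
def hello_world(request):
    name = request.args.get("name", "World")
    return f"Hello, {name}!"
```

It can then be deployed with a command along the lines of `gcloud functions deploy hello_world --runtime python39 --trigger-http --allow-unauthenticated` (runtime and flags may differ in the lab).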
Deploy and Test the Hello World Function
Lab: Triggering a Cloud Function with Cloud Pub/SubCloud Functions can be triggered in 2 ways: through a direct HTTP request or through a background event. One of the most frequently used services for background events is Cloud Pub/Sub, Google Cloud’s platform-wide messaging service. In this pairing, Cloud Functions becomes a direct subscriber to a specific Cloud Pub/Sub topic, which allows code to be run whenever a message is received on a specific topic. In this hands-on lab, we’ll walk through the entire experience, from setup to confirmation. Enable Required APIs
Create Pub/Sub Topic
Create a Cloud Function
import base64
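The `import base64` line above is the start of the Pub/Sub-triggered function used in this lab. A complete minimal sketch of such a background function, with the entry-point name as an assumption, looks like this:

```python
# main.py - a background Cloud Function triggered by a Pub/Sub topic.
import base64

def hello_pubsub(event, context):
    """`event` carries the Pub/Sub message; `context` carries event metadata."""
    if "data" in event:
        message = base64.b64decode(event["data"]).decode("utf-8")
    else:
        message = "(no data)"
    print(f"Received message: {message}")  # appears in Cloud Logging
```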
Publish Message to Topic From Console
Confirm Cloud Function Execution
Trigger Cloud Function Directly From Command Line
Publish Message to Topic From Command Line
Cloud RunCaaS (container as a service) Cloud Run, at a very high level, takes the same serverless model as Cloud Functions, but changes the deployment artifact from a piece of code to a container. This gives us a containers as a service model. So choosing between these two models really depends on what type of serverless application you're trying to deploy and you need to keep in mind the constraints and differences of each choice. Cloud Functions has a limited set of supported runtimes. Broadly speaking, Cloud Functions is perfect if and only if your use case fits into its model and requirements. More generally then, Cloud Run will give you the most choice but with slightly more overhead. Cloud Run app in the GCP consoleCloud Run, a fully managed serverless execution environment that lets you run stateless HTTP-driven containers, without worrying about the infrastructure. Cloud Run for Anthos, which lets you deploy Cloud Run applications into an Anthos GKE cluster running on-prem or in Google Cloud. Our commitment to Knative, the open API and runtime environment on which Cloud Run is based, bringing workload portability and the serverless developer experience to your Kubernetes clusters, wherever they may be. Revisions and Traffic SplittingTriggers and ScheduleSummary:Cloud Run can be invoked from HTTP requests or they can be event driven, but Cloud Run can also handle multiple concurrent connections per instance, and then spin itself back down to zero when there is no further demand. Lab: Cloud Run Deployments with CI/CDIntroductionIn this lab, we’ll configure a continuous deployment pipeline for Cloud Run using Cloud Build. We'll set up Cloud Source Repositories and configure Cloud Build to automate our deployment pipeline. Then, we'll commit changes to Git and observe the fully automated pipeline as it builds and deploys our new image into service. Enable APIs and Create the Git Repo
Commit Application Code
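As a hedged illustration of the application code committed in this step (the actual lab repo may differ), a minimal container-ready Flask app for Cloud Run only needs to listen on the port provided in the PORT environment variable:

```python
# app.py - minimal web app suitable for Cloud Run.
# Cloud Run injects the PORT environment variable; default to 8080 locally.
import os
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from Cloud Run!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```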
Set Up Cloud Build
steps:
- name: 'gcr.io/cloud-builders/docker'
args: ['build', '-t', 'gcr.io/$PROJECT_ID/amazingapp:$COMMIT_SHA', '.']
- name: 'gcr.io/cloud-builders/docker'
args: ['push', 'gcr.io/$PROJECT_ID/amazingapp:$COMMIT_SHA']
- name: 'gcr.io/cloud-builders/gcloud'
args:
- 'run'
- 'deploy'
- 'amazingapp'
- '--image'
- 'gcr.io/$PROJECT_ID/amazingapp:$COMMIT_SHA'
- '--region'
- 'us-east1'
- '--platform'
- 'managed'
- '--allow-unauthenticated'
images:
- gcr.io/$PROJECT_ID/amazingapp:$COMMIT_SHA
Set Up Build Triggers
Serverless APIs
Cloud Endpoints: The first option uses the sidecar method with the Nginx version of the Extensible Service Proxy, or ESP. The second option uses the remote proxy method with the Envoy version of the proxy. Both of these methods take quite a bit of configuration to set up, but the results are worth it.
API Gateway: As a fully managed service, API Gateway is somewhat simpler to use than Cloud Endpoints.
Summary: Cloud Endpoints provides an Nginx proxy service for App Engine, Compute Engine, and GKE, as well as a newer Envoy-based proxy for App Engine, Cloud Run, and Cloud Functions.
Google ML APIs: Google's pre-trained, off-the-shelf APIs, which use machine learning models that Google has already trained for millions.
Cloud Vision API: Currently, the Cloud Vision API supports the following features:
Cloud Video Intelligence API: The Video Intelligence API is like the Vision API but for videos instead of images.
Cloud Speech-to-Text API: You simply provide the API with an audio file containing some speech, and it will return a text transcript.
Cloud Text-to-Speech API: The reverse of Speech-to-Text: you provide the API with some text, and it returns synthesized spoken audio.
Translation API: You simply send it some text to convert from one language to another.
AutoML: This is Google's managed machine learning service that lets you create your own models by using the existing pre-trained ones as a jumping-off point. AutoML supports video intelligence, vision, natural language, and translation models, with a few more in beta at the time of this recording. Using AutoML, you fine-tune the existing models using your own data or classifications with minimal effort and little to no machine learning expertise.
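As an example of calling one of these pre-trained models, here is a minimal sketch of label detection with the Cloud Vision API Python client; the image path is a placeholder, and google-cloud-vision plus default credentials are assumed.

```python
# Detect labels in a local image with the Cloud Vision API.
# Assumes `pip install google-cloud-vision` and application default credentials.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("photo.jpg", "rb") as f:            # placeholder image file
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)
for label in response.label_annotations:
    print(f"{label.description}: {label.score:.2f}")
```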
GCP Application SecurityOverviewYou configure IAM by defining policies or rules that allow you to specify who has what access to which resource. An easy way to manage access to staff is to add them to an appropriate Google group and then grant the role to the group. Individual users can then be moved in and out of groups as required. Service AccountsOne of the types of member we can use in an IAM policy is the service account. When you create a project in GCP, This is particularly good practice when dealing with either calls to Google APIs or when you're deploying microservices, or even when you're doing both. Each part of the stack should operate with its own individual identity, which can then be used to allow it to talk to the other parts of the stack that you want it to and only those parts, or the Google APIs that you have granted access to using its unique service account identity. Security VulnerabilitiesWeb security scanning and container analysis are two features that can come in really handy and add to your security toolkit when you're developing your applications. Best way of preventing exploits over your application is to spot the vulnerabilities before you even deploy it, Note: scans are not a substitute for manual security checks, secure design, and good security best practices! Best PracticesOrganizational SecurityThink about your organizational security in advance and how you can implement it using the resource hierarchy of GCP. Network SecurityEvery VPC network in a project has its own set of firewall rules,and each firewall rule consists of seven components. Examples. IngressIngress rule,so it controls traffic coming into our project. EgressThis time it's an egress rule to control outgoing network traffic. You can also create hierarchical firewall policies, which are sets of rules deployed at the organization or folder level. VPC Service ControlsVPC Service Controls allows you to define a service perimeter around protected resources and enforce special rules at the border of this perimeter, which act independently of any existing IAM or firewall policies. Combining IAM firewall rules and VPC service controls will give you the most protection against unwanted data exfiltration. Summary:Use custom service identities and service accounts, and grant granular permissions to only the resources required for your apps. Limit the blast radius of any potential attack. ResourcesIAM Concepts |
Application Performance MonitoringOperations SuiteCloud LoggingThere are many different types of logs available: Platform logs -- logs emitted by Google managed services, such as Cloud SQL and Cloud Run. Logs are stored in buckets, and retention is based on a couple of factors. Log RoutingAll of these log entries, regardless of where they come from, go through the Cloud Logging API Logs in the Cloud Logging API are sorted by the logs router. The logs router can be configured to send logs to sinks. For example, the default log sink will direct logs to the default logs cloud storage bucket, or custom sinks can be created to send logs to BigQuery, Pub/Sub, or other storage buckets. Log ViewingMost parts of the console have embedded logs via a "Logs" tab. The Logs Explorer easily allows you to build a query using the log fields on the left and visualize the results in a histogram view on the right. You can use the query builder to save queries and run them later. Cloud MonitoringWorkspacesA workspace is the first thing you set up, and it's a single place for monitoring resources, not just in your own project, Resources and their metricsResources are the hardware or software component being monitored -- for example, a Compute Engine disk or instance, a Cloud SQL instance, or a Cloud Tasks queue, or any one of dozens of different resources that can be created in GCP. Custom MetricsIf you can't find the data you need with the thousands of built-in metrics offered by GCP, you can create your own custom metrics. For example, you have an app running in Compute Engine that makes calls to a third-party URL as part of its logic. You want to record the latency of those calls and log them as a time series metric, so that later you can set up an alerting policy to let you know if something is going wrong. We can easily add some code to our app to record that latency, but then the fun part is recording it as a metric for Cloud Monitoring. There's quite a few moving parts to how custom metrics work, but you only really need to understand it from the theoretical point of view Logs-based MetricsYou can also use logs-based metrics with Cloud Monitoring, which are simply metrics that originate in log entries. Lab Install and Configure Monitoring Agent with Google Cloud MonitoringThis lab will guide you in the process of installing the optional Monitoring agent on a pre-created Apache web server. Installing and configuring the Monitoring agent allows us to collect more detailed metrics that would not normally be possible, such as memory utilization and application-specific metrics. In this case, we will collect metrics from our Apache web server application. Initialize Monitoring Workspace
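For the third-party-latency scenario described above, here is a hedged sketch of writing one point of a custom metric with the Cloud Monitoring Python client; the metric type and project ID are placeholders, and the proto construction follows the pattern used in Google's published samples, so treat it as a starting point rather than a drop-in implementation.

```python
# Record one latency measurement as a custom Cloud Monitoring metric.
# Assumes `pip install google-cloud-monitoring` and suitable credentials.
import time
from google.cloud import monitoring_v3

project_id = "my-project"                                  # placeholder
client = monitoring_v3.MetricServiceClient()
project_name = f"projects/{project_id}"

series = monitoring_v3.TimeSeries()
series.metric.type = "custom.googleapis.com/external_call_latency"  # placeholder metric
series.resource.type = "global"
series.resource.labels["project_id"] = project_id

now = time.time()
seconds, nanos = int(now), int((now % 1) * 10**9)
interval = monitoring_v3.TimeInterval({"end_time": {"seconds": seconds, "nanos": nanos}})
point = monitoring_v3.Point({"interval": interval, "value": {"double_value": 0.231}})
series.points = [point]

client.create_time_series(name=project_name, time_series=[series])
print("Wrote one point to", series.metric.type)
```

An alerting policy can then be defined on this metric, exactly as described for the latency scenario above.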
Install Monitoring Agent on Compute Engine Instance
Open the Demonstration Web Page and Generate Some Traffic
Confirm Agent Installation and Apache Configuration Success in Monitoring Workspace
Uptime ChecksAn uptime check is simply an HTTP request made by the monitoring platform to your app. Uptime checks work on any HTTP URL, providing they have a fully qualified domain name. It's just looking for a successful HTTP response, but it can check for expired SSL certificates and optionally it can use custom headers and basic authentication. Cloud Error ReportingCloud Error Reporting is powered by Cloud Logging, that will aggregate occurrences of any errors and provide an easy-to-read stack trace through a dedicated interface, which will hopefully reveal the bug. Lab Real-Time Troubleshooting with Google Cloud Error ReportingYou have been tasked with deploying your team's application to App Engine, as a proof-of-concept demo for Platform-as-a-Service technologies that you will present to the rest of your organization. The app will work just great — most of the time. For some reason, it also seems to return an internal server error, although you can't see any bugs in your code when you run the app locally. In this lab, we will use GCP Error Reporting to see live errors and stack traces from our deployed application, to identify where the error is occurring. Deploy the Demo Application
Use Cloud Error Reporting
Fix the Application and Redeploy
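Besides the errors Cloud Error Reporting surfaces automatically from logs, exceptions can also be reported explicitly from application code with the client library. A minimal sketch, assuming google-cloud-error-reporting is installed and default credentials are available:

```python
# Report a handled exception to Cloud Error Reporting.
# Assumes `pip install google-cloud-error-reporting` and default credentials.
from google.cloud import error_reporting

client = error_reporting.Client()

def risky_operation():
    return 1 / 0  # deliberately fails for the demo

try:
    risky_operation()
except Exception:
    client.report_exception()   # sends the current stack trace to Error Reporting
```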
Cloud DebuggerSnapshot Break PointsIt's common practice when debugging applications to use snapshots. Depending on your development environment, Log PointsAdding a log point, a piece of logging code within the application code, that will log the value variables etc. to Cloud Logging. This logging code will expire in 24 hours. Cloud TraceModern application stacks are often distributed across multiple layers where a frontend service relies on a middleware service, which in turn relies on a backend service. ResourcesCloud Logging Overview |
Exam PrepGoogle References:App Enginehttps://cloud.google.com/appengine/docs/the-appengine-environments Big Queryhttps://cloud.google.com/bigquery/docs/running-queries#batch Cloud Buildhttps://cloud.google.com/cloud-build/docs/configuring-builds/use-community-and-custom-builders#creating_a_custom_builder Cloud Codehttps://cloud.google.com/code/docs Cloud Functionhttps://cloud.google.com/functions/docs/troubleshooting Cloud KMShttps://cloud.google.com/kms/docs/separation-of-duties#using_separate_project Cloud ProfilerTo diagnose the performance problem of your application running slower on a Compute Engine instance compared to when it is tested locally, you should use Cloud Profiler. Cloud Profiler is a service that allows you to analyze the performance of your application by providing detailed information on where the application is spending the most time. Cloud Runhttps://cloud.google.com/blog/products/serverless/knative-based-cloud-run-services-are-ga Cloud Spannerhttps://cloud.google.com/spanner/docs/data-types Cloud Security Scannerhttps://cloud.google.com/security-command-center/docs/concepts-web-security-scanner-overview Cloud Storagehttps://cloud.google.com/blog/products/storage-data-transfer/uploading-images-directly-to-cloud-storage-by-using-signed-url Compute Enginehttps://cloud.google.com/compute/docs/instance-groups/updating-migs#opportunistic_updates DataFlowhttps://cloud.google.com/dataflow/docs/concepts/streaming-with-cloud-pubsub Dockerhttps://cloud.google.com/build/docs/optimize-builds/speeding-up-builds#using_a_cached_docker_image Firebasehttps://firebase.google.com/docs/firestore/manage-data/enable-offline Firestorehttps://cloud.google.com/datastore/docs/firestore-or-datastore GKEhttps://cloud.google.com/istio/docs/istio-on-gke/overview Logginghttps://cloud.google.com/logging/docs/routing/overview#logs-retention MicroservicesMiscellaneoushttps://cloud.google.com/architecture/modernization-path-dotnet-applications-google-cloud#take_advantage_of_compute_engine Monitoringhttps://cloud.google.com/monitoring/alerts/concepts-indepth Stackdriver uptime checks periodically send HTTP or HTTPS requests to a specified URL and verify that the response is received and matches an expected pattern. If an error occurs, such as a timeout or a non-200 response code, it will trigger an alert. Pub/Subhttps://cloud.google.com/dataflow/docs/concepts/streaming-with-cloud-pubsub Scanninghttps://cloud.google.com/container-analysis/docs/automated-scanning-howto#view-code https://cloud.google.com/binary-authorization/docs Securityhttps://cloud.google.com/iap/docs/concepts-overview Testinghttps://cloud.google.com/architecture/distributed-load-testing-using-gke Troubleshootinghttps://cloud.google.com/compute/docs/troubleshooting/troubleshooting-using-serial-console |
Exam: Main Topics CoveredCloud Computing FundamentalsFor most applications, you need three core elements: compute, storage, and networking. Compute ServicesInfrastructure-as-a-Service, runs traditional IT infrastructure components that are offered as a service.Compute Engine The OS Login is the standard feature for GCP that allows to use Compute Engine IAM roles to manage SSH access to Linux instances. It is possible and easy to add an extra layer of security by setting up OS Login with two-factor authentication, and manage access at the organization level by setting up organization policies. What you have to do is: Enable 2FA for your Google account or domain. Enable 2FA on your project or instance. Grant the necessary IAM roles to the correct users. For any further detail: https://cloud.google.com/compute/docs/oslogin/setup-two-factor-authentication Compute Engine offers several types of storage options for your instances. Each of the following storage options has unique price and performance characteristics: Zonal persistent disk: Efficient, reliable block storage. Managed instance groups (MIGs) let you operate apps on multiple identical VMs. You can make your workloads scalable and highly available by taking advantage of automated MIG services, including: autoscaling, autohealing, regional (multiple zone) deployment, and automatic updating. Platform-as-a-ServiceApp Engine Each App Engine application is a top-level container that includes the service, version, and instance resources. Services: App Engine services behave like microservices. Therefore, you can run your whole app in a single service or you can design and deploy multiple services to run as a set of microservices. So you may divide a big app in mobile and web procedure and specialized backends. Very useful with microservices. Versions with versions each service, independently, may switch between different versions for rollbacks, testing, or other temporary events. You can route traffic to one or more specific versions of your app by migrating or splitting traffic. Instances The versions within your services run on one or more instances. By default, App Engine scales your app to match the load. Your apps will scale up the number of instances that are running to provide consistent performance, or scale down to minimize idle instances and reduces costs. For more information about instances, see How Instances are Managed. For any further detail: https://cloud.google.com/appengine/docs/standard/python/an-overview-of-app-engine App Engine, both Standard then Flex Edition, are specially suited for Building Microservices. In an App Engine Project you can use any mix of standard and flexible environment services, written in any language. In addition with Cloud Endpoints it is possible to deploy, protect, and monitor your APIs. Using an OpenAPI Specification or API frameworks, Cloud Endpoints gives tools for API development and provides insight with Stackdriver Monitoring, Trace, and Logging. Cloud Functions with https endpoint is not enough for enterprise integrated projects. Container-as-a-ServiceThese are self-contained software environments. For example, a container might include a complete application plus all of the third-party packages it needs. Containers are somewhat like virtual machines except they don’t include the operating system. This makes it easy to deploy them because they’re very lightweight compared to virtual machines. In fact, containers run on virtual machines. Cloud Run GKE
To deploy an application from a Kubernetes Deployment file use gcloud or Deployment Manager to create a cluster then use kubectl to create a deployment GKE’s cluster autoscaler automatically resizes the number of nodes in a given node pool, based on the demands of your workloads. You don’t need to manually add or remove nodes or over-provision your node pools. Instead, you specify a minimum and maximum size for the node pool, and the rest is automatic. If your node pool contains multiple managed instance groups with the same instance type, cluster autoscaler attempts to keep these managed instance group sizes balanced when scaling up. This can help prevent an uneven distribution of nodes among managed instance groups in multiple zones of a node pool. Cluster autoscaler considers the relative cost of the instance types in the various pools, and attempts to expand the least expensive possible node pool. The reduced cost of node pools containing preemptible VMs is taken into account. Vertical pod autoscaling (VPA) is a feature that can recommend values for CPU and memory requests and limits, or it can automatically update the values. With Vertical pod autoscaling: Cluster nodes are used efficiently, because Pods use exactly what they need. Pods are scheduled onto nodes that have the appropriate resources available. You don’t have to run time-consuming benchmarking tasks to determine the correct values for CPU and memory requests. Maintenance time is reduced, because the autoscaler can adjust CPU and memory requests over time without any action on your part. With GKE you don’t have to use the scalability features of Compute Engine. For any further detail: You may notice that pods are similar to Compute Engine managed instance groups. A key difference is that pods are for executing applications in containers and may be placed on various nodes in the cluster, while managed instance groups all execute the same application code on each of the nodes. Also, you typically manage instance groups yourself by executing commands in Cloud Console or through the command line. Pods are usually managed by a controller. What is Google Kubernetes Engine (GKE)? Function-as-a-ServiceCloud Function Storage Options (When is the right use case for each):FilesCloud Storage: flat unstructured The only solution for multi-regional object storage is Cloud Storage. In order to reach higher performances, the use of Cloud CDN is advisable. A Cloud Storage trigger enables a function to be called in response to changes in Cloud Storage. When you specify a Cloud Storage trigger for a function, you choose an event type and specify a Cloud Storage bucket. Your function will be called whenever a change occurs on an object (file) within the specified bucket.
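A minimal sketch of such a Cloud Storage-triggered background function, assuming the Python runtime; the entry-point name is arbitrary, and the event fields shown are the standard ones for object-change events.

```python
# main.py - background Cloud Function triggered by changes in a Cloud Storage bucket.
def handle_gcs_event(event, context):
    """`event` describes the changed object; `context` describes the trigger event."""
    print(f"Event type: {context.event_type}")          # e.g. google.storage.object.finalize
    print(f"Bucket: {event['bucket']}")
    print(f"File: {event['name']}")
    print(f"Created: {event.get('timeCreated')}")
```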
Text files in Cloud Storage can be streamed to BigQuery with the Text to BigQuery (Stream) pipeline template, which lets you stream text files stored in Cloud Storage, transform them using a JavaScript user-defined function (UDF) that you provide, and output the result to BigQuery.

Filestore: hierarchical, NFS-compatible file shares.

Databases

Relational, transactional:
Cloud SQL: fully managed, but hard to scale to high-volume, high-speed data. To avoid a single point of failure (SPOF), you have to use a managed database service or manage a replica yourself.

Cloud Spanner: massively scalable, but more expensive, and a rewrite is needed if you are coming from a legacy SQL database. Cloud Spanner provides a special kind of consistency called external consistency. We are used to dealing with strong consistency, which guarantees that after an update all queries receive the same result; in other words, the state of the database is always consistent, no matter how processing, partitions, and replicas are distributed. The problem with a globally, horizontally scalable database such as Spanner is that transactions are executed on many distributed instances, so strong consistency is hard to guarantee. Spanner achieves it by means of TrueTime, a distributed clock available across Google's computing systems. With TrueTime, Spanner manages the serialization of transactions and thereby achieves external consistency, the strictest concurrency control for databases.

In Cloud Spanner you must be careful not to create hotspots with the choice of your primary key. For example, if you insert records with a monotonically increasing integer as the key, you'll always insert at the end of your key space. This is undesirable because Cloud Spanner divides data among servers by key ranges, which means your inserts will be directed at a single server, creating a hotspot. Techniques that spread the load across multiple servers and avoid hotspots: hash the key and store it in a column, then use the hash column (or the hash column and the unique key columns together) as the primary key; swap the order of the columns in the primary key; use a Universally Unique Identifier (UUID), preferably version 4, because it uses random values in the high-order bits (don't use a UUID algorithm, such as version 1, that stores the timestamp in the high-order bits); or bit-reverse sequential values.

NoSQL: Bigtable
Cloud Bigtable is a sparsely populated table with three dimensions (row, column, time) that can scale to billions of rows and thousands of columns, enabling you to store terabytes or even petabytes of data and to access it at very low latency. A single value in each row is indexed; this value is known as the row key. Cloud Bigtable is ideal for storing very large amounts of single-keyed data with very low latency. It supports high read and write throughput at low latency, and it is an ideal data source for MapReduce operations. Each row is indexed by a single row key, and columns that are related to one another are typically grouped together into a column family. Each column is identified by a combination of the column family and a column qualifier, which is a unique name within the column family. Each row/column intersection can contain multiple cells, or versions, at different timestamps, providing a record of how the stored data has been altered over time. Cloud Bigtable tables are sparse; if a cell does not contain any data, it does not take up any space. Cloud Bigtable scales in direct proportion to the number of machines in your cluster without any bottleneck.
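A hedged sketch with the cbt CLI that shows the row key / column family model in practice; the project, instance, table, and row-key scheme are placeholders chosen for illustration.

```bash
# Create a table with one column family, then write a single cell.
cbt -project my-project -instance my-bigtable createtable sensor-readings families=stats

# A composite row key such as "<sensor-id>#<reverse-timestamp>" spreads writes across
# the key space while keeping one sensor's recent readings adjacent; a monotonically
# increasing key would concentrate writes on one tablet and create a hotspot.
cbt -project my-project -instance my-bigtable set sensor-readings "sensor42#9999999999" stats:temp=21.5
```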
Datastore / Firestore
Datastore manages relationships between entities (records) in a hierarchically structured space similar to the directory structure of a file system. When you create an entity, you can optionally designate another entity as its parent; the new entity is a child of the parent entity. An entity without a parent is a root entity. A transaction is a set of Datastore operations on one or more entities in up to 25 entity groups. Each transaction is guaranteed to be atomic, which means transactions are never partially applied: either all of the operations in the transaction are applied, or none of them are.

Data warehouse

Networking

VPC
When you create a virtual machine on GCP, you have to put it in a Virtual Private Cloud (VPC). A VPC is very similar to an on-premises network: each virtual machine in a VPC gets an IP address and can communicate with other VMs in the same VPC. You can also divide a VPC into subnets and define routes to specify how traffic should flow between them. By default, all outbound traffic from a VM to the Internet is allowed; if you also want to allow inbound traffic, you need to assign an external IP address to the VM. If you want VMs in one VPC to communicate with VMs in another VPC, you can connect the VPCs using VPC Network Peering. If you want to create a secure connection between a VPC and an on-premises network, you can use Cloud VPN (Virtual Private Network), Cloud Interconnect, or Peering. A VPN sends encrypted traffic over the public Internet, whereas Cloud Interconnect and Peering communicate over a private, dedicated connection between your site and Google's network. Cloud Interconnect is much more expensive than a VPN, but it provides higher speed and reliability because it is a dedicated connection. Peering is free, but it's not well integrated with GCP, so you should usually use Cloud Interconnect instead.

Global networking services

CDN
One way to make your web applications respond more quickly to your customers is to use a content delivery network. Google offers Cloud CDN for this purpose. It caches your content on Google's global network, which reduces the time it takes for users to retrieve it, no matter where they are located in the world. This is especially important if your content includes video.

Cloud Load Balancing
To make sure your application stays responsive when there is a sudden increase in traffic, or even if one of Google's data centers fails, you can use Cloud Load Balancing. It redirects application traffic to groups of VM instances distributed in different locations, and it can automatically scale the number of instances up or down as needed. All of this complexity is hidden behind a single IP address.

Cloud Armor
Load balancing works well for normal increases in network traffic, but what about when you're hit by a Distributed Denial of Service (DDoS) attack? For that you can use Cloud Armor, which integrates with Cloud Load Balancing.

IAM
One of the most important layers of security in GCP is IAM, which stands for Identity and Access Management. Since identity is handled by an outside service, such as Cloud Identity or even Google accounts, IAM is really about access management. It lets you assign roles to users and applications; a role grants specific permissions, such as being able to create a VM instance.
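A minimal sketch of granting a role with least privilege in mind; the project ID, service account, and role here are placeholders, not values taken from this guide.

```bash
# Bind a narrowly scoped role to a service account at the project level.
# Prefer the smallest role that covers the task (here: pushing images to Artifact Registry).
gcloud projects add-iam-policy-binding my-project \
  --member=serviceAccount:ci-builder@my-project.iam.gserviceaccount.com \
  --role=roles/artifactregistry.writer
```

The same command pattern works at the folder or organization level, but granting at the narrowest resource that still works keeps the blast radius small.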
Encryption
Another important security area is encryption. GCP handles this very well because everything is encrypted by default. However, many organizations need to manage the encryption keys used to encrypt their data, especially to comply with certain security standards. Google provides Cloud Key Management Service (KMS) so that your organization can centrally manage its encryption keys and integrate key management with the other Google Cloud services enterprises use to implement cryptographic functions. A similar service is Secret Manager, which is a central place to store your API keys, passwords, certificates, and other secrets.
https://cloud.google.com/kms/
https://cloud.google.com/secret-manager/docs/
An HSM (hardware security module) is a physical computing device that stores and manages digital keys for strong authentication and provides crypto-processing. HSMs are usually plug-in cards or external devices attached directly to a computer or network server. Cloud HSM is a managed HSM service, fully integrated with KMS for creating and using customer-managed encryption keys. It is necessary only in special cases where a hardware-enforced additional level of security is required.
https://cloud.google.com/hsm/
Finally, the Data Loss Prevention service helps you protect sensitive data.

Interacting with GCP
There are many ways to interact with GCP. The Google Cloud Console runs in a browser, so you don't need to install anything to use it. Alternatively, you can install the SDK (Software Development Kit). The SDK includes two types of tools. The first is what you'd expect in an SDK: a collection of client libraries that your applications can use to interact with GCP services. The second is a set of command-line tools, including gcloud, gsutil, bq, and kubectl. The one you'll use the most is gcloud, which manages all services other than Cloud Storage, BigQuery, and Kubernetes. It's easy to use the Google Cloud Console to create GCP resources, but if you know the command-line interface, you can usually create resources more quickly, often with a single command. For example, to create a virtual machine called instance-1 in the us-central1-a zone, all you need to type is "gcloud compute instances create instance-1 --zone=us-central1-a". This creates the instance using defaults for everything; to specify particular options, you just add them to the command. Cloud Shell is a very small virtual machine that you can use to run commands; it already has the Cloud SDK installed, so you don't need to install it yourself.

Migrating to GCP
Migrate a VMware environment
Migrate for Compute Engine
Migrate for Anthos
Migrate large data sets
Migrate Active Directory

Data Analytics
Google offers many data analytics services, which can be divided into Ingest, Store, Process, and Visualize.

Ingest
There are lots of ways to ingest data, but if you have a large amount of data streaming in, you'll likely need to use Pub/Sub. It essentially acts as a buffer for services that may not be able to handle large spikes of incoming data.

Store
In the Store category, the main option for interactive analytics, that is, running queries on your data, is BigQuery. If you need high-speed automated analytics, then Bigtable is usually the right choice.
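A hedged sketch of interactive analytics with the bq CLI; the public dataset referenced here is only an example table choice, and the query is illustrative.

```bash
# Run a Standard SQL query against a BigQuery public dataset and return the top rows.
bq query --use_legacy_sql=false \
  'SELECT name, SUM(number) AS total
   FROM `bigquery-public-data.usa_names.usa_1910_2013`
   GROUP BY name
   ORDER BY total DESC
   LIMIT 10'
```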
Process
The Process category is where Google has the most options. These services are used to clean and transform data. If you already have Hadoop or Spark-based code, you can use Dataproc, which is a managed implementation of Hadoop and Spark. Alternatively, if you already have Apache Beam-based code, you can use Dataflow. If you're starting from scratch, you might want to choose Dataflow, because Apache Beam has some advantages over Hadoop and Spark. If you'd like to do data processing without writing any code, you can use Dataprep, which actually uses Dataflow under the hood.

Visualize
To visualize or present your data with graphs, charts, and so on, you can use Data Studio or Looker. Data Studio was Google's original visualization solution, but Google then acquired Looker, which is a more sophisticated business intelligence platform. One big difference is that Data Studio is free and Looker isn't. So if you need simple reporting, Data Studio should be fine, but if you want to do something more complex, Looker is your best bet.

Processing pipelines
Cloud Composer / Data Fusion
IoT Core

Deployment Strategies

DevOps
DevOps services help you automate the building, testing, and releasing of application updates.

Cloud Build
The most important DevOps tool is Cloud Build. It lets you create continuous integration / continuous delivery pipelines. A Cloud Build pipeline can define workflows for building, testing, and deploying across multiple environments such as VMs, serverless, Kubernetes, or Firebase. Cloud Build integrates with third-party code repositories, such as Bitbucket and GitHub, but you may want to use Google's Cloud Source Repositories, which are private Git repositories hosted on GCP. If you're deploying your applications using containers, you can configure Cloud Build to put the code into a container image and push it to Artifact Registry, a private Docker image store hosted on GCP.

Cloud Build provides a gke-deploy builder that enables you to deploy a containerized application to a GKE cluster. gke-deploy is a wrapper around kubectl, the command-line interface for Kubernetes. It applies Google's recommended practices for deploying applications to Kubernetes by: updating the application's Kubernetes configuration to use the container image's digest instead of a tag; adding recommended labels to the Kubernetes configuration; retrieving credentials for the GKE clusters to which you're deploying the image; and waiting for the submitted Kubernetes configuration to be ready. If you want to deploy your applications using kubectl directly and do not need the additional functionality, Cloud Build also provides a kubectl builder that you can use to deploy your application to a GKE cluster.

A/B Testing

Troubleshooting Applications

Cloud Operations Suite
Once you've deployed applications on GCP, you'll need to maintain them, and Google provides many services to help with that. One of the most important is the Cloud Operations suite, formerly known as Stackdriver.

Cloud Monitoring & Cloud Logging
Cloud Monitoring gives you a great overview of what's happening with all of your resources. By default, it provides graphs showing metrics like CPU utilization, response latency, and network traffic, and you can also create your own custom graphs and dashboards. An even more critical feature is that you can set up alerts to notify you if there are problems; for example, you can set up an uptime check that alerts you if a virtual machine goes down.
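A small hedged sketch of working with Cloud Logging from the CLI; the log name, payload fields, and filter are illustrative placeholders rather than anything prescribed by this guide.

```bash
# Emit a structured (JSON) log entry at ERROR severity, then query recent errors.
gcloud logging write my-app-log '{"message": "checkout failed", "orderId": "1234"}' \
  --payload-type=json --severity=ERROR

# Read back the most recent matching entries; such filters can also back log-based
# metrics and alerting policies in Cloud Monitoring.
gcloud logging read 'logName:"my-app-log" AND severity>=ERROR' --limit=10 --format=json
```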
The suite also includes Error Reporting, Cloud Trace, Cloud Debugger, and Cloud Profiler for debugging live applications and tracking down performance problems.

Security Command Center
In addition to monitoring performance, you'll also need to monitor security and compliance. Security Command Center gathers this information in one place. Its overview dashboard shows you active threats and vulnerabilities, ordered by severity; for example, if one of your applications is vulnerable to cross-site scripting attacks, that vulnerability will show up in the list. Security Command Center also includes a compliance dashboard that tells you about violations of compliance standards, such as PCI-DSS, in your GCP environment.

Cloud Deployment Manager
Cloud Deployment Manager is Google's solution for automated resource creation. To use it, you create a configuration file with all the details of the GCP resources you want to create, and then feed it to Cloud Deployment Manager. What makes it really powerful is that you can define the configuration of multiple interconnected resources, such as two VM instances and a Cloud SQL database, and then deploy all of them at once.

Security

Servers
Google server machines use a variety of technologies to ensure that they are booting the correct software stack, including cryptographic signatures over low-level components like the BIOS, bootloader, kernel, and base operating system image. These signatures can be validated during each boot or update. The components are all Google-controlled, built, and hardened. With each new generation of hardware, Google strives to continually improve security: depending on the generation of server design, the trust of the boot chain is rooted in either a lockable firmware chip, a microcontroller running Google-written security code, or a Google-designed security chip.

Services
Each service that runs on the infrastructure has an associated service account identity. A service is provided cryptographic credentials that it can use to prove its identity when making or receiving remote procedure calls (RPCs) to other services. These identities are used by clients to ensure that they are talking to the correct intended server, and by servers to limit access to methods and data to particular clients.

GFE
When a service wants to make itself available on the Internet, it can register itself with an infrastructure service called the Google Front End (GFE). The GFE ensures that all TLS connections are terminated with correct certificates and follow best practices such as supporting perfect forward secrecy. The GFE also applies protections against denial-of-service attacks, and it forwards requests for the service using the RPC security protocol discussed previously.

Microservice Architecture
IAM / Security Best Practices
Principle of least privilege
Pub/Sub best practices |
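On the Pub/Sub point, a minimal hedged sketch of loosely coupling a producer and a consumer from the CLI; the topic and subscription names are placeholders chosen for illustration.

```bash
# Create a topic and a pull subscription, publish one message, then pull it.
gcloud pubsub topics create orders-topic
gcloud pubsub subscriptions create orders-worker-sub --topic=orders-topic --ack-deadline=30
gcloud pubsub topics publish orders-topic --message='{"orderId": "1234"}'
gcloud pubsub subscriptions pull orders-worker-sub --auto-ack --limit=1
```

Because the producer only knows the topic and the consumer only knows its subscription, either side can be scaled, replaced, or paused without the other changing.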
Prep Exam 2023
161 pages...of goodness!?! |
Google Cloud – HipLocal Case Study
HipLocal is a community application designed to facilitate communication between people in close proximity. It is used for event planning and organizing sporting events, and for businesses to connect with their local communities. HipLocal launched recently in a few neighborhoods in Dallas and is rapidly growing into a global phenomenon. Its unique style of hyper-local community communication and business outreach is in demand around the world.

HipLocal Solution Concept
HipLocal wants to expand their existing service with updated functionality in new locations to better serve their global customers. They want to hire and train a new team to support these locations in their time zones. They will need to ensure that the application scales smoothly and provides clear uptime data, and that they analyze and respond to any issues that occur. The key points are that HipLocal wants to expand globally, needs the ability to scale, and needs clear observability, alerting, and the ability to react.

HipLocal Existing Technical Environment
HipLocal's environment is a mixture of on-premises hardware and infrastructure running in Google Cloud. The HipLocal team understands their application well, but has limited experience with globally scaled applications. Their existing technical environment is as follows:
Business requirements
HipLocal's investors want to expand their footprint and support the increase in demand they are experiencing. Their requirements are:
Technical requirements
GCP Certification Exam Practice Questions
HipLocal's .NET-based auth service fails under intermittent load. What should they do?
Reference |
Resource List:
Cloud Developer:
Training:
BALLARD:
GABRIEL:
JOSH:
Google Partner Skills boost
Awesome list
SHON:
Course: A Cloud Guru
Exams:
Udemy
Exam Topics
Cloud Academy (seems very good)
https://googlecloudcheatsheet.withgoogle.com/
Practice:
Google
ITExams
Exam:
Online Proctored Certification Testing
Webassessor Exam Registration