A service mesh is a dedicated infrastructure layer for handling service-to-service communication. It’s responsible for the reliable delivery of requests through the complex topology of services that comprise a modern, cloud native application.
Service mesh solutions have two distinct components that behave somewhat differently:
The data plane is composed of a set of intelligent proxies (Envoy) deployed as sidecars. These proxies mediate and control all network communication between microservices along with Mixer, a general-purpose policy and telemetry hub.
The control plane manages and configures the proxies to route traffic. …
Kubeflow provides a simple, portable, and scalable way of running Machine Learning workloads on Kubernetes.
In this module, we will install Kubeflow on Amazon EKS, run single-node training and inference using TensorFlow, train and deploy a model locally and remotely using Fairing, set up a Kubeflow pipeline, and review how to call AWS managed services such as Amazon SageMaker for training and inference.
We need more resources to complete this chapter of the EKS Workshop. First, we’ll increase the size of our cluster to 6 nodes:
export NODEGROUP_NAME=$(eksctl get nodegroups --cluster eksworkshop-eksctl -o json | jq -r '.[0].Name') …
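The jq filter above pulls the first nodegroup’s name out of eksctl’s JSON output. The same extraction in Python looks like this (shown against sample output, since `eksctl` is not assumed to be on the path; the nodegroup name below is illustrative):

```python
import json

# Illustrative sample of the JSON that `eksctl get nodegroups -o json` emits.
sample = '[{"Cluster": "eksworkshop-eksctl", "Name": "ng-1a2b3c4d"}]'

nodegroups = json.loads(sample)
nodegroup_name = nodegroups[0]["Name"]  # equivalent of: jq -r '.[0].Name'
print(nodegroup_name)
```

The extracted name is then passed to the scaling command, something like `eksctl scale nodegroup --cluster=eksworkshop-eksctl --name=$NODEGROUP_NAME --nodes=6`, to reach the 6 nodes mentioned above.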
This document will help you to try out EKS/Fargate deployment in a Cloud environment.
Create a Cloud9 environment and increase the disk size of the Cloud9 instance.
pip3.7 install --user --upgrade boto3
export instance_id=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
python -c "import boto3
import os
from botocore.exceptions import ClientError
ec2 = boto3.client('ec2')
volume_info = ec2.describe_volumes(
    Filters=[
        {
            'Name': 'attachment.instance-id',
            'Values': [
                os.getenv('instance_id')
            ]
        }
    ]
)
volume_id = volume_info['Volumes'][0]['VolumeId']
try:
    resize = ec2.modify_volume(
        VolumeId=volume_id,
        Size=30
    )
    print(resize)
except ClientError as e:
    if e.response['Error']['Code'] == 'InvalidParameterValue':
        print('ERROR MESSAGE: {}'.format(e))"
if [ $? -eq 0 ]; then
    sudo reboot
fi
Note: The above command…
TCP: 53, 135, 389, 445, 464, 636, 3268, 3269, 49152-65535
UDP: 53, 88, 135, 389, 445, 464, 636, 3268, 3269, 123, 137, 138
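If these ports need to be opened in an EC2 security group, the TCP list above can be turned into `IpPermissions` entries for boto3’s `authorize_security_group_ingress`. A minimal sketch, assuming the rules come from a trusted source network (the CIDR below is a placeholder, and the actual API call is left out):

```python
# Placeholder assumption: the network allowed to reach these ports.
SOURCE_CIDR = "10.0.0.0/16"

# TCP ports from the list above; a tuple marks the dynamic RPC range.
tcp_ports = [53, 135, 389, 445, 464, 636, 3268, 3269, (49152, 65535)]

perms = []
for p in tcp_ports:
    low, high = p if isinstance(p, tuple) else (p, p)
    perms.append({
        "IpProtocol": "tcp",
        "FromPort": low,
        "ToPort": high,
        "IpRanges": [{"CidrIp": SOURCE_CIDR}],
    })

print(len(perms))  # one ingress rule per port or range
```

The resulting list would be passed as `IpPermissions=perms` to `ec2.authorize_security_group_ingress(GroupId=..., IpPermissions=perms)`; the UDP list can be handled the same way with `"IpProtocol": "udp"`.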
2. You can dump the entire event to CloudWatch Logs by adding the following line to your function; it converts the event to a JSON string with 4-space indentation:
console.log('Received event:', JSON.stringify(event, null, 4));
3. For end-to-end testing, rename the target file in S3 that your browser points to. This avoids browser-side or Lambda@Edge caching that could prevent the Lambda function from being invoked.
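One simple way to get a fresh, uncached object on every test run is to derive a timestamped key name and copy the file to it. A sketch of the key-naming step (`index.html` is a placeholder for whatever file your browser requests; the S3 copy itself is not performed here):

```python
import time

def fresh_key(base="index.html"):
    """Return a timestamped variant of the key, e.g. index-1700000000.html,
    so neither the browser nor Lambda@Edge can serve a cached copy."""
    stem, _, ext = base.rpartition(".")
    return f"{stem}-{int(time.time())}.{ext}"

print(fresh_key())
```

You would then copy the object to that key (for example with `aws s3 cp` between the old and new keys) and point the browser at the new URL for each test.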
4. The console.log output of the lambda function…
https://qualysguard.qualys.com/am/help/sensors/cloud_agent.htm
2. Perform installation with the following command:
sudo rpm -ivh qualys-cloud-agent.x86_64.rpm
3. Activate Qualys Cloud Agent with the following command:
sudo /usr/local/qualys/cloud-agent/bin/qualys-cloud-agent.sh ActivationId=1032h37-dd20-4fde-93c8-2q3dwedae34 CustomerId=aa9845nb0-6643-5564-8045-1234wsDDAS
4. Update Qualys Proxy setting and restart the service.
echo "qualys_https_proxy=\"http://<proxy-url>:1080\"" > /etc/sysconfig/qualys-cloud-agent
sudo systemctl restart qualys-cloud-agent
5. For troubleshooting, the agent log on Linux is at:
/var/log/qualys/qualys-cloud-agent.log
https://qualysguard.qualys.com/am/help/sensors/cloud_agent.htm
2. Perform installation with the following command:
sudo dpkg --install QualysCloudAgent.deb
3. Activate Qualys Cloud Agent with the following command:
sudo /usr/local/qualys/cloud-agent/bin/qualys-cloud-agent.sh ActivationId=10feb827-dd20-4fde-93c8-q234dasds CustomerId=aa41eda0-6643-5564-8045-23edsdsdDs…