Tag Archives: aws

  • Amazon Web Services (AWS): Data Transfer Options

    Amazon Web Services (AWS) is the market leader in providing infrastructure and applications for cloud-based workloads.

    A critical need for cloud deployments is moving the data that resides in your data center or on enterprise premises into AWS.

    The following options are available to transfer data from on premises to the AWS cloud:

    • Direct Connect
    • S3 Upload (Internet Upload)
    • Snowball

    Let us look deeper into each of these options.

    Direct Connect

    In this option, AWS provides you with a dedicated network connection between your data center or office and AWS.

    The reasons you would go for Direct Connect are:

    • Lower bandwidth costs when you transfer large amounts of data
    • Consistent network performance compared to the public internet
    • Many Direct Connect locations sit close to AWS regions

    If you are not close to a Direct Connect location, you can use an AWS partner (https://aws.amazon.com/directconnect/partners/) to get a dedicated network connection from your data center to the AWS location.

    You get the option of either 1 Gbps or 10 Gbps uplink connectivity.
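As a rough back-of-envelope illustration of what those uplink speeds mean (decimal units, a fully saturated link, and no protocol overhead assumed):

```python
# Hours to move 10 TB over a fully saturated link -- a sketch only;
# real throughput is lower once protocol overhead is accounted for.
data_bits = 10 * 10**12 * 8          # 10 TB expressed in bits

hours_1g = data_bits / 1e9 / 3600    # 1 Gbps uplink
hours_10g = data_bits / 1e10 / 3600  # 10 Gbps uplink
# roughly 22 hours at 1 Gbps, a little over 2 hours at 10 Gbps
```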

    S3 Upload

    AWS S3 provides a reliable and redundant object store for your applications.

    The primary way to speed up S3 uploads over the internet is multipart upload. AWS S3 multipart upload breaks an object into multiple parts and uploads them in parallel.
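A sketch of how a multipart upload plan could be computed, based on S3's documented limits of a 5 MiB minimum part size (except for the last part) and at most 10,000 parts per object (the helper name is my own):

```python
import math

MIN_PART = 5 * 1024 * 1024   # S3 minimum part size (5 MiB)
MAX_PARTS = 10_000           # S3 maximum number of parts per object

def plan_parts(object_size):
    """Return (part_size, part_count) for an object of the given size."""
    # Grow the part size just enough to stay within the 10,000-part cap.
    part_size = max(MIN_PART, math.ceil(object_size / MAX_PARTS))
    part_count = math.ceil(object_size / part_size)
    return part_size, part_count

# A 100 MiB object splits into twenty 5 MiB parts uploadable in parallel.
size, count = plan_parts(100 * 1024 * 1024)
```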

    S3 Transfer Acceleration is a service that helps you transfer objects much faster to buckets in regions across continents. You enable acceleration at the bucket level, and the upload URL changes. See the reference section below.

    Snowball Transfer

    Amazon provides rugged Snowball appliances that you can request (one-business-day delivery by UPS).

    Each Snowball appliance stores either 50 TB or 80 TB. It includes an E Ink display (similar to a Kindle screen) for address and courier labeling. The appliance has a TPM chip for storing sensitive information such as encryption metadata, and data is encrypted with AES-256.

    The entire process takes about 7-10 days. Pricing is $200 for the 50 TB unit and $250 for the 80 TB unit. Shipping costs are extra and billed by UPS.
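To see when the 7-10 day Snowball turnaround wins, compare it against moving the same 80 TB over the wire (decimal units, a fully saturated link assumed):

```python
# Days to transfer 80 TB over the internet -- a back-of-envelope sketch.
data_bits = 80 * 10**12 * 8                # 80 TB expressed in bits

days_100mbps = data_bits / 100e6 / 86400   # typical 100 Mbps internet link
days_1gbps = data_bits / 1e9 / 86400       # dedicated 1 Gbps link
# ~74 days at 100 Mbps vs ~7.4 days at 1 Gbps: unless you have a fast
# dedicated link, Snowball's 7-10 day turnaround wins by a wide margin.
```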

    Reference

    https://www.linkedin.com/pulse/amazon-web-services-aws-data-transfer-options-anil-saldanha

    https://aws.amazon.com/blogs/aws/aws-storage-update-amazon-s3-transfer-acceleration-larger-snowballs-in-more-regions/

  • Amazon Web Services Data Security

    AWS provides many options to encrypt the data that you put in the cloud.

    Some of the options include:

    1. Client Side Encryption
    2. Server Side Encryption

    Client Side Encryption

    Client Side Encryption refers to encrypting the data before you put it in the AWS Cloud. In this case, you can either manage your own key or use an AWS Key Management Service (KMS) key.

    Server Side Encryption

    Server Side Encryption refers to AWS encrypting data as it is written into the cloud. Here you can provide your own key, use an AWS KMS-managed key, or use an S3-managed key.
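Each of these choices corresponds to a request header on the S3 PUT: an S3-managed key (SSE-S3), a KMS-managed key (SSE-KMS), or a key you supply yourself (SSE-C). A sketch of the selecting header for each mode, per the S3 API:

```python
# Request header that selects each server-side encryption mode on an S3 PUT.
SSE_HEADERS = {
    "SSE-S3": {"x-amz-server-side-encryption": "AES256"},
    "SSE-KMS": {"x-amz-server-side-encryption": "aws:kms"},
    # SSE-C: you send the key material (plus its MD5) with every request.
    "SSE-C": {"x-amz-server-side-encryption-customer-algorithm": "AES256"},
}
```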

    If you require high levels of confidentiality for your data, I suggest the following:

    • Create a Customer Master Key (CMK) in a region.
    • Pass the CMK ID to the AWS API, which generates a data key server side.

    The CMK can encrypt only up to 4 KB of data, which makes it perfect for encrypting the data key. The data key itself has no size restrictions.
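This pattern is envelope encryption. A minimal local sketch of the flow, where the keystream function is a stand-in for AES-256 (illustration only; in practice the CMK stays inside KMS and you would call KMS's GenerateDataKey plus a vetted cipher library):

```python
import hashlib
import os

def keystream(key, nonce, n):
    # Derive a pseudo-random keystream from a key (stand-in for AES-256-CTR;
    # illustration only -- do NOT use home-made crypto in production).
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor(data, ks):
    return bytes(a ^ b for a, b in zip(data, ks))

master_key = os.urandom(32)   # stands in for the CMK held inside KMS
nonce = os.urandom(16)

# 1. Generate a 256-bit data key.
data_key = os.urandom(32)
# 2. The master key encrypts only the 32-byte data key (well under 4 KB).
encrypted_data_key = xor(data_key, keystream(master_key, nonce, 32))
# 3. The data key encrypts the payload, which can be any size.
plaintext = b"confidential payload " * 1000
ciphertext = xor(plaintext, keystream(data_key, nonce, len(plaintext)))
# Store ciphertext alongside encrypted_data_key; discard the plaintext key.
```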

    Using a CMK with Server Side Encryption is a good solution for confidentiality needs in AWS.

  • Amazon Web Services (AWS) Redshift queries are slow

    Never forget the golden rule of AWS Redshift.

    “Whenever you add, delete, or modify a significant number of rows, you should run a VACUUM command and then an ANALYZE command.”

    This will speed up your queries.

    The VACUUM command reclaims space from deleted rows and re-sorts the table; ANALYZE updates the statistics the query planner uses to optimize queries.
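A minimal example of the maintenance pass, assuming a hypothetical table named events:

```sql
-- Reclaim deleted space and re-sort rows, then refresh planner statistics.
VACUUM events;
ANALYZE events;
```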

  • Amazon Web Services (AWS) Managed Elasticsearch

    AWS provides a managed Elasticsearch cluster. It lets you create a cluster quickly and scale it seamlessly as demand rises and falls.

    Adding nodes or changing the size of the cluster does not take a lot of time. AWS handles the cluster resize very cleanly.

    Managed Elasticsearch is on v1.5.2 as of January 2016, even though community Elasticsearch is at v2.1.x. For most projects this is not a problem; unless you have advanced Elasticsearch needs, the managed Elasticsearch infrastructure on AWS should be sufficient.

    AWS managed Elasticsearch does not support the native transport (TCP) client, so you cannot use the Elasticsearch client API to write your applications. It exposes only an HTTP interface, served over HTTPS on port 443.

    For Java applications, you can use the open source Jest API.
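Because only HTTPS is exposed, any HTTP client works. A sketch of building a search request with the Python standard library (the endpoint and index name are hypothetical; substitute your domain's endpoint from the AWS console):

```python
import json
import urllib.request

# Hypothetical domain endpoint -- only HTTPS on port 443 is exposed.
ENDPOINT = "https://search-mydomain-abc123.us-east-1.es.amazonaws.com"

body = json.dumps({"query": {"match": {"title": "aws"}}}).encode()
req = urllib.request.Request(
    ENDPOINT + "/myindex/_search",
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would send the search over HTTPS.
```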