WEBINAR

Cloud Migration Strategies & Assessment


FAQs

What would be my very first step if I want to save costs on the cloud?

We would recommend taking a stepwise approach: start with the easy fixes, and then move on to fixes that may need some re-architecting.

You can optimize costs from either a strategic or a tactical point of view:

Strategic:

  • Understand usage in terms of traffic and transactions for the resources you are planning to use on the Cloud
  • Look at multiple options and review the pros and cons
  • Benchmark the performance and the costs
  • Rollout

Tactical:

  • Review the resource utilization metrics (see the sketch after this list)
  • Look for alternative options in the Cloud. For example, if you are running your code intermittently on a virtual machine, you might consider a serverless architecture
  • Start with smaller specifications that are just enough to meet your performance metrics, and then increase capacity as needed
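
As an illustration of reviewing utilization metrics (the first tactical step above), here is a minimal sketch using boto3 and Amazon CloudWatch; the region, instance ID and the utilization threshold mentioned in the comments are placeholder assumptions, not figures from the session.

```python
import datetime

import boto3  # assumes an AWS environment with credentials configured

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Pull two weeks of average CPU utilization for one VM (instance ID is a placeholder)
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime.datetime.utcnow() - datetime.timedelta(days=14),
    EndTime=datetime.datetime.utcnow(),
    Period=3600,          # hourly data points
    Statistics=["Average"],
)

datapoints = stats["Datapoints"]
avg_cpu = sum(p["Average"] for p in datapoints) / max(len(datapoints), 1)
print(f"Average CPU over 14 days: {avg_cpu:.1f}%")
# A consistently low average (say, under 10-20%) flags a candidate for downsizing
# or for a move to a serverless option.
```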

How does moving to the cloud help protect the business, and ensure continuity and disaster recovery?

All major cloud providers offer options to launch instances in multiple geographical locations. When designing an architecture, we can take advantage of this to build a highly available system. When it comes to DR, we don’t have to pre-provision systems as we traditionally did. We can use infrastructure as code to provision servers and other components on any cloud within minutes.

There are multiple cost-effective options to provision cloud infra, especially when you use infrastructure as code.
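
As a minimal illustration of the infrastructure-as-code point, the sketch below uses the AWS CDK for Python to define a small recovery footprint that can be deployed into a secondary region on demand; the stack name, instance size and region are assumptions for illustration, and the same idea applies with Terraform, Bicep or other IaC tools.

```python
# Assumes aws-cdk-lib is installed and `cdk deploy` runs with valid credentials;
# stack/resource names and the DR region are illustrative placeholders.
import aws_cdk as cdk
from aws_cdk import aws_ec2 as ec2


class DrStack(cdk.Stack):
    """Recreates a small application footprint in a recovery region on demand."""

    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)

        # Networking and a single application server, defined entirely as code
        vpc = ec2.Vpc(self, "DrVpc", max_azs=2)
        ec2.Instance(
            self,
            "DrAppServer",
            vpc=vpc,
            instance_type=ec2.InstanceType("t3.micro"),
            machine_image=ec2.MachineImage.latest_amazon_linux2(),
        )


app = cdk.App()
# Point the same template at a secondary region when DR is invoked
DrStack(app, "DrStack", env=cdk.Environment(region="eu-west-1"))
app.synth()
```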

Is the cloud safe for personal information especially with GDPR in place?

Cloud data centers and infrastructure are already compliant with security requirements such as PCI DSS. But it is the responsibility of the organization to make sure it designs highly secure architectures.

For example, you have to encrypt your data at rest and maintain proper access controls.

Where transaction tokens are involved, you can use key management features provided by security vendors like RSA or Vormetric, or use cloud-native key management services. The tools are available, but it is up to us to put proper checks in place to keep data secure.
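
As one possible illustration (not a specific setup from the session), the sketch below uses boto3 with Amazon S3 and AWS KMS to keep an object encrypted at rest and to encrypt a token with a managed key; the bucket name, key aliases and object path are hypothetical.

```python
import boto3  # assumes AWS; bucket, key aliases and file names are illustrative placeholders

s3 = boto3.client("s3")
kms = boto3.client("kms")

# Store an object encrypted at rest with a customer-managed KMS key (server-side encryption)
with open("records.csv", "rb") as data:
    s3.put_object(
        Bucket="example-sensitive-data",
        Key="customers/2024/records.csv",
        Body=data,
        ServerSideEncryption="aws:kms",
        SSEKMSKeyId="alias/example-data-key",
    )

# Encrypt a small payload (e.g. a transaction token) directly with KMS
token_ciphertext = kms.encrypt(
    KeyId="alias/example-token-key",
    Plaintext=b"txn-token-1234",
)["CiphertextBlob"]
```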

How do we measure financial gains?

We can compare Capex and Opex costs pre- and post-migration (a simple comparison sketch follows the list below). We can also measure the increase in application availability and the reduction in maintenance hours and outages.

There are four ways to measure financial gains:

  • Efficiency in data center operations and effort reduction
  • Cost savings in terms of moving from your on-premise data center to the Cloud
  • Increased availability of your systems
  • Avoidance of a disaster
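
As a back-of-the-envelope illustration of the Capex/Opex comparison, the sketch below uses entirely hypothetical figures; real numbers would come from your own invoices, amortization policy and cloud cost projections.

```python
# Illustrative-only numbers: a simple pre- vs post-migration cost comparison.
# Amortize on-premises Capex over its useful life and compare total annual run cost.

on_prem_capex = 300_000          # hardware refresh (hypothetical)
capex_amortization_years = 5
on_prem_annual_opex = 120_000    # power, colocation, support staff (hypothetical)

cloud_annual_opex = 150_000      # projected cloud bill after optimization (hypothetical)

on_prem_annual_total = on_prem_capex / capex_amortization_years + on_prem_annual_opex
annual_saving = on_prem_annual_total - cloud_annual_opex

print(f"On-prem annual cost : {on_prem_annual_total:,.0f}")
print(f"Cloud annual cost   : {cloud_annual_opex:,.0f}")
print(f"Annual saving       : {annual_saving:,.0f}")
# Availability gains and avoided outages can be added as monetized line items
# (e.g. hours of downtime avoided x cost per hour of downtime).
```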

What are the hidden costs of the cloud and how can we avoid them?

One of the advantages of the Cloud is that there are no hidden costs: you know upfront the set-up cost involved for a workload. You might also want to consider data transfer charges when estimating cost; these vary considerably based on where the data sources and destinations are. It is best to review the options well before moving to implementation. The best approach is to benchmark applications on performance, cost and your projections.

Calculating costs can be a tricky area. For example, for serverless components you might want to consider transactions, size of the data, concurrency, compute time and so on.
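
To show why transactions, data size, concurrency and compute time all matter, here is a rough estimator in the style of AWS Lambda pricing; the rates and workload numbers are assumptions and the sketch ignores free tiers, data transfer and concurrency limits.

```python
# A rough serverless cost estimate, for illustration only. The rates below are
# indicative public list prices and will vary by region and over time.

PRICE_PER_MILLION_REQUESTS = 0.20     # USD per 1M requests (assumed rate)
PRICE_PER_GB_SECOND = 0.0000166667    # USD per GB-second (assumed rate)

def estimate_monthly_cost(requests_per_month: int,
                          avg_duration_ms: float,
                          memory_mb: int) -> float:
    """Estimate monthly cost from transaction volume, compute time and memory size."""
    request_cost = requests_per_month / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    gb_seconds = requests_per_month * (avg_duration_ms / 1000) * (memory_mb / 1024)
    compute_cost = gb_seconds * PRICE_PER_GB_SECOND
    return request_cost + compute_cost

# Example: 5M transactions/month, 200 ms average duration, 512 MB memory
print(f"Estimated: ${estimate_monthly_cost(5_000_000, 200, 512):.2f} per month")
```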

With the constant changes in cloud pricing, how do we know which tool and pricing would be accurate?

We always refer to the pricing calculators before provisioning infrastructure. Benchmarking makes the costs even more predictable.

Do we need to refactor an application before migrating it to the cloud or can it be moved as-is?

It depends on the application. If there are no dependencies, a lift and shift would be possible. Spending time on refactoring can have several gains too, in terms of cost, increased availability, reduced maintenance and so on. Sometimes lift and shift may not be the ideal solution, even if it is the quickest. It is advisable to spend some time assessing options before going ahead with a plan.

What factors other than cost optimization come into play while monitoring a cloud environment?

Other than cost, factors such as increased system availability, increased performance, reduced maintenance through automation, and more come into play while monitoring a cloud environment.

What are the security concerns associated with the cloud?
  • Security considerations on the Cloud are similar to those with traditional systems – we have to invest in understanding the security tools the Cloud offers, and how to safeguard data and other resources on the cloud.
  • Most concerns with cloud security arise from gaps in skills and training rather than from the Cloud technology itself.

That’s the reason we always advise organizations to build reference architectures and have them certified by security teams before deploying applications.

When it comes to reducing costs on the Cloud, what are the low hanging fruits?

Some of the low hanging fruits include:

  • Optimize virtual machine resources and, wherever possible, go for long-term commitments on VMs
  • Look for applications which can be moved to serverless architectures
  • Use cloud native services wherever possible for cost savings
  • Have automation in place to turn off or tear down resources that aren’t needed (see the sketch after this list)
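
As a sketch of the automation point in the last bullet, the script below stops running virtual machines that carry a hypothetical auto-off tag, using boto3; the tag convention, region and scheduling mechanism are assumptions rather than a prescribed setup.

```python
import boto3  # assumes AWS; the tag key/value convention is a hypothetical example

ec2 = boto3.client("ec2", region_name="us-east-1")

# Find running instances explicitly tagged as safe to shut down outside office hours
response = ec2.describe_instances(
    Filters=[
        {"Name": "tag:auto-off", "Values": ["true"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)

instance_ids = [
    instance["InstanceId"]
    for reservation in response["Reservations"]
    for instance in reservation["Instances"]
]

if instance_ids:
    ec2.stop_instances(InstanceIds=instance_ids)
    print(f"Stopped: {instance_ids}")
# Scheduled via cron or EventBridge, this kind of job keeps non-production
# resources from running (and billing) overnight and on weekends.
```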

Why Snowflake and why not Redshift? Isn’t Redshift much less expensive than Snowflake? If it’s not cost optimization, what was the strategy?

We evaluated both Redshift and Snowflake. For some of our specific use cases, Snowflake performed much better than Redshift, and due to the nature of the usage, the costs with Snowflake were comparable to those with Redshift. Snowflake also has the added advantage of being cloud agnostic.

What tools do you use to scan containers?

Docker Bench, Clair and Anchore are examples of tools one can use for container scanning.

For early startups in the data analytics and ML space, what cloud service trends and products come out on top?

The choice of the tool would depend on the customer’s environment and the cloud provider being used.
One has to analyze the requirements in detail to make the best choice.
Some of the top services in the ML space include AWS SageMaker, AWS Textract and AWS Lex (for chatbots).
We have also seen use of Azure stack solutions like Azure Synapse, QnA Maker and Computer Vision.
While these are services from each of the cloud providers, we have also built our own vendor agnostic services for each of these areas, which help you port and migrate the solution onto any of these cloud providers should there be cost benefits in the future.

What is the typical engagement time frame for Cloud optimization?

We follow a consultative approach in such scenarios. A two-week study of the current landscape tells us the approach we should take to gain maximum optimization. From there, it could take around two weeks to frame a proposal for optimization, depending on the size of the estate.

What is your take on having a hybrid solution when not all of your application stack is ready for cloud?

For various reasons, many of our customers prefer a hybrid approach. It could be due to compliance concerns, or due to technical limitations. Establishing a secure connection (direct connect, for example) back to the data centers is a very popular method of hybridization. We can split the applications and run some workloads on the cloud while maintaining some on-prem. If it is a technical limitation, we can also look at re-architecting – of course, we will have to make sure that such a move would be beneficial to the organization.

DevSecOps covers code analysis, compliance monitoring, threat investigation and vulnerability assessment. I believe you have addressed only code analysis and, partially, vulnerability assessment. What about the rest?

We have implemented security capabilities at various levels like code analysis, compliance monitoring, vulnerability assessment and threat intelligence.
We cover these areas using various cloud-native and vendor-agnostic tools such as the following (a small pipeline sketch follows this list):

  • Vulnerability Assessment – Qualys or AWS Inspector (for AWS Setup)
  • Open Source Vulnerability assessment – Snyk or Whitesource
  • Code Analysis – SonarQube
  • Threat Intelligence – AWS GuardDuty and others
  • Compliance Monitoring – AWS Inspector, GuardDuty, AWS Systems Manager or Azure Security Center
  • Threat Modeling – Microsoft Threat Modeling tool
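
As an example of wiring one of these checks into a delivery pipeline (a generic illustration, not our exact setup), the sketch below shells out to the Snyk CLI and fails the build when high or critical findings are reported; the JSON field names and the severity threshold are assumptions to verify against your Snyk version.

```python
# Assumes the `snyk` CLI is installed and authenticated in the CI environment.
import json
import subprocess
import sys

result = subprocess.run(
    ["snyk", "test", "--json"],      # scan open-source dependencies in the current project
    capture_output=True,
    text=True,
)

report = json.loads(result.stdout or "{}")
vulns = report.get("vulnerabilities", [])
high_or_critical = [v for v in vulns if v.get("severity") in ("high", "critical")]

print(f"Total issues: {len(vulns)}, high/critical: {len(high_or_critical)}")
if high_or_critical:
    sys.exit(1)  # break the pipeline so the finding is fixed before deployment
```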

What is the best migration method (e.g., gsutil or awscli) to migrate S3 data close to 1 PB from AWS to GCP without using cloud-native tools (AWS Snowball or GCP STS)?

We would need to delve deeper into this particular use case to answer it accurately. A few questions come to mind: can we compress the data? And if not, can we logically split the data into smaller chunks?
GCP Storage Transfer Service is the recommended method, but if there are any reservations we can consider gsutil.
Given that the volume of data is large, it’s best to benchmark the cost impact of the transfer before opting for migration. It would also depend on factors like the age of the data, how often the data is used, and so on.
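
If the reservations about Storage Transfer Service can be cleared, a one-off S3-to-GCS job could be scripted roughly as below using the google-cloud-storage-transfer client; the project, bucket names and credentials are placeholders, and the field names should be verified against the current client documentation.

```python
# Rough sketch of a one-off S3 -> GCS transfer with Storage Transfer Service.
from google.cloud import storage_transfer

client = storage_transfer.StorageTransferServiceClient()

transfer_job = {
    "project_id": "my-gcp-project",                      # placeholder
    "description": "One-time S3 to GCS migration",
    "status": storage_transfer.TransferJob.Status.ENABLED,
    "transfer_spec": {
        "aws_s3_data_source": {
            "bucket_name": "source-s3-bucket",           # placeholder
            "aws_access_key": {
                "access_key_id": "AWS_ACCESS_KEY_ID",
                "secret_access_key": "AWS_SECRET_ACCESS_KEY",
            },
        },
        "gcs_data_sink": {"bucket_name": "destination-gcs-bucket"},  # placeholder
    },
}

job = client.create_transfer_job({"transfer_job": transfer_job})
print(f"Created transfer job: {job.name}")
```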

When we migrate to cloud, the sensitivity of having PII data in public cloud shoots up. So, could you name one of the data masking solutions that you have employed to protect PII data in non-prod cloud environments?

We have created our own tools to simulate data for testing. Our tool, called Test Data Hub, mimics and creates data that maintains a data structure similar to that of the actual data. This can be used to perform tests and simulations.
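
Test Data Hub itself is proprietary, but purely as a generic illustration of the idea, the open-source Faker library can generate records that keep the shape of a production table while containing no real PII; the schema below is hypothetical.

```python
# A generic illustration (not Test Data Hub): synthesize records that mirror the
# structure of a real customer table for non-prod environments. Schema is hypothetical.
from faker import Faker

fake = Faker()

def synthetic_customer() -> dict:
    """Produce one record with the same structure as the production table."""
    return {
        "customer_id": fake.uuid4(),
        "full_name": fake.name(),
        "email": fake.email(),
        "date_of_birth": fake.date_of_birth(minimum_age=18, maximum_age=90).isoformat(),
        "billing_address": fake.address().replace("\n", ", "),
    }

# Generate a small non-prod dataset for functional tests
test_rows = [synthetic_customer() for _ in range(100)]
print(test_rows[0])
```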