
Cloud

Our Cloud experts help you find the best solutions tailored to your business's needs, structure and budget. We ensure a low-risk, seamless migration to the Cloud with built-in security, and we help you create a Cloud strategy that optimizes costs.

Our Capabilities

Security and Compliance
Migration and Automation
Operational Integration
Cloud Strategy Optimization
Big Data and IoT
Advisory and IT Transformation

Case Studies

FAQs

We would recommend taking a stepwise approach: look for some easy fixes first, and then for fixes that may need some re-architecting. You can optimize costs from either a strategic or a tactical point of view:
Strategic:
  • Understand usage in terms of traffic and transactions for the resources you are planning to use on the Cloud
  • Look at multiple options and review the pros and cons
  • Benchmark the performance and the costs
  • Rollout
Tactical:
  • Review the resource utilization metrics (see the sketch after this list)
  • Look for alternative options in the Cloud. For example, if you are running your code intermittently on a virtual machine, you might consider a serverless architecture
  • Start with smaller specifications, just enough to meet your performance metrics, and then increase capacity as needed
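As an illustration of reviewing utilization metrics, here is a minimal sketch, assuming an AWS environment with boto3 (the instance ID is a placeholder), that pulls two weeks of average CPU utilization for a VM and flags it as a candidate for downsizing or a serverless alternative:

    # Sketch: pull average CPU utilization for one EC2 instance over the last
    # 14 days to spot over-provisioned VMs. Assumes AWS + boto3; the instance
    # ID below is a placeholder.
    import boto3
    from datetime import datetime, timedelta, timezone

    cloudwatch = boto3.client("cloudwatch")

    INSTANCE_ID = "i-0123456789abcdef0"  # hypothetical instance
    now = datetime.now(timezone.utc)

    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
        StartTime=now - timedelta(days=14),
        EndTime=now,
        Period=3600,            # one data point per hour
        Statistics=["Average"],
    )

    datapoints = stats["Datapoints"]
    avg_cpu = sum(p["Average"] for p in datapoints) / max(len(datapoints), 1)
    print(f"Average CPU over 14 days: {avg_cpu:.1f}%")
    if avg_cpu < 10:
        print("Candidate for downsizing or a serverless alternative.")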
All major cloud providers give you the option to launch instances in multiple geographical locations. When we design an architecture, we can take advantage of this to build a highly available system. When it comes to disaster recovery (DR), we don't have to pre-provision systems the way we traditionally did: using infrastructure as code, we can provision servers and other components on any cloud within minutes.
There are multiple cost-effective options to provision cloud infrastructure, especially when you use infrastructure as code.
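As a sketch of what infrastructure as code looks like in practice, the snippet below (assuming AWS CloudFormation via boto3; the template file and stack name are placeholders) provisions a DR environment on demand from a version-controlled template:

    # Sketch: provision a stack from a version-controlled template with
    # CloudFormation, so a DR environment can be recreated on demand instead of
    # being pre-provisioned. Assumes AWS + boto3; "dr-stack.yaml" and the stack
    # name are placeholders.
    import boto3

    cloudformation = boto3.client("cloudformation", region_name="eu-west-1")

    with open("dr-stack.yaml") as f:       # template kept in source control
        template_body = f.read()

    cloudformation.create_stack(
        StackName="dr-environment",
        TemplateBody=template_body,
        Capabilities=["CAPABILITY_NAMED_IAM"],
    )

    # Block until the environment is ready (typically minutes, not weeks).
    waiter = cloudformation.get_waiter("stack_create_complete")
    waiter.wait(StackName="dr-environment")
    print("DR environment provisioned.")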
Cloud data centers and infrastructure are already compliant with security requirements such as PCI DSS, but it is the organization's responsibility to design highly secure architectures.
For example, you have to encrypt your data at rest and maintain proper access controls.
Where transaction tokens are involved, you can use key management features from security vendors such as RSA or Vormetric, or use cloud-native key management services. The tools are available, but it is up to us to put the proper checks in place to keep data secure.
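For illustration, here is a minimal sketch, assuming AWS with boto3 (the bucket name and key alias are placeholders), that enforces encryption at rest on an S3 bucket with a customer-managed KMS key:

    # Sketch: enforce encryption at rest on an S3 bucket using a customer-managed
    # KMS key, as one example of cloud-native key management. Assumes AWS + boto3;
    # the bucket name and key alias are placeholders.
    import boto3

    s3 = boto3.client("s3")
    kms = boto3.client("kms")

    BUCKET = "example-transaction-data"          # hypothetical bucket
    key_id = kms.describe_key(KeyId="alias/example-data-key")["KeyMetadata"]["KeyId"]

    s3.put_bucket_encryption(
        Bucket=BUCKET,
        ServerSideEncryptionConfiguration={
            "Rules": [{
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": key_id,
                }
            }]
        },
    )
    print(f"Default SSE-KMS enabled on {BUCKET}")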
We can compare CapEx and OpEx costs pre- and post-migration. We can also measure the increase in application availability and the reduction in maintenance hours and outages.
There are four ways to measure financial gains (a rough worked example follows this list):
  • Efficiency in data center operations and effort reduction
  • Cost savings in terms of moving from your on-premise data center to the Cloud
  • Increased availability of your systems
  • Avoidance of a disaster
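The snippet below sketches such a comparison; every figure in it is an illustrative placeholder, not a benchmark:

    # Sketch: a back-of-the-envelope pre/post-migration comparison. All figures
    # are illustrative placeholders, not benchmarks.
    on_prem_annual = {
        "hardware_amortization": 120_000,   # CapEx spread over useful life
        "datacenter_operations": 80_000,    # power, space, support contracts
        "ops_effort_hours": 4_000,
    }
    cloud_annual = {
        "infrastructure": 95_000,           # OpEx from the provider's bill
        "ops_effort_hours": 1_500,          # after automation
    }
    HOURLY_RATE = 60
    DOWNTIME_HOURS_AVOIDED = 20             # from higher availability
    COST_PER_DOWNTIME_HOUR = 5_000

    savings = (
        on_prem_annual["hardware_amortization"]
        + on_prem_annual["datacenter_operations"]
        - cloud_annual["infrastructure"]
        + (on_prem_annual["ops_effort_hours"] - cloud_annual["ops_effort_hours"]) * HOURLY_RATE
        + DOWNTIME_HOURS_AVOIDED * COST_PER_DOWNTIME_HOUR
    )
    print(f"Estimated annual benefit: ${savings:,.0f}")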
One of the advantages of the Cloud is that there are no hidden costs: you know upfront the set-up cost involved for a workload. You might also want to consider data transfer charges when estimating the cost; these vary considerably depending on where the data sources and destinations are. It is best to review the options well before moving to implementation, and to benchmark applications on performance, cost and your projections.
Calculating costs can be tricky. For serverless components, for example, you might need to consider transactions, data size, concurrency, compute time and so on.
We always refer to the pricing calculators before provisioning infrastructure, and benchmarking makes the costs even more predictable.
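As a sketch of the kind of arithmetic involved, the snippet below estimates a monthly serverless bill from transaction volume, memory and compute time. The unit prices are assumptions and should always be confirmed against the provider's pricing calculator:

    # Sketch: estimating the monthly cost of a serverless function from
    # transaction volume, memory and compute time. The unit prices below are
    # placeholders; always confirm against the provider's pricing calculator.
    REQUESTS_PER_MONTH = 10_000_000
    AVG_DURATION_S = 0.25
    MEMORY_GB = 0.5

    PRICE_PER_MILLION_REQUESTS = 0.20       # assumed unit price
    PRICE_PER_GB_SECOND = 0.0000166667      # assumed unit price

    request_cost = REQUESTS_PER_MONTH / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    compute_cost = REQUESTS_PER_MONTH * AVG_DURATION_S * MEMORY_GB * PRICE_PER_GB_SECOND

    print(f"Requests: ${request_cost:,.2f}  Compute: ${compute_cost:,.2f}")
    print(f"Estimated monthly total: ${request_cost + compute_cost:,.2f}")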
It depends on the application. If there are no dependencies, a lift and shift is possible. Spending time on refactoring can bring gains too, in terms of cost, increased availability, less maintenance and so on. Sometimes lift and shift is not the ideal solution even if it is the quickest, so it is advisable to spend some time assessing the options before committing to a plan.
Other than cost, factors such as increased system availability, increased performance and reduced maintenance through automation come into play when monitoring a cloud environment.
  • Security considerations in the Cloud are similar to those for traditional systems; we have to invest in understanding what security tooling the Cloud offers and how to use it. It is important to understand how to safeguard data and other resources in the Cloud.
  • Most concerns with cloud security arise from gaps in skills and training rather than from the Cloud technology itself.
That is why we always advise organizations to build reference architectures and have them certified by security teams before deploying applications.
Some of the low-hanging fruit includes:
  • Optimize virtual machine resources and, wherever possible, go for long-term commitments on VMs
  • Look for applications that can be moved to serverless architectures
  • Use cloud-native services wherever possible for cost savings
  • Put automation in place to turn off or tear down resources that aren't needed (see the sketch after this list)
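As an illustration of the last point, here is a minimal sketch, assuming AWS with boto3 and a hypothetical "schedule: office-hours" tag, that stops tagged instances outside working hours; in practice it would run on a schedule:

    # Sketch: tear down resources that are not needed outside working hours.
    # This stops any running EC2 instance tagged "schedule: office-hours"; the
    # tag and scope are assumptions, and the script would typically run on a
    # schedule.
    import boto3

    ec2 = boto3.client("ec2")

    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:schedule", "Values": ["office-hours"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]

    instance_ids = [
        inst["InstanceId"]
        for res in reservations
        for inst in res["Instances"]
    ]

    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
        print(f"Stopped {len(instance_ids)} idle instances: {instance_ids}")
    else:
        print("Nothing to stop.")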
We evaluated both Redshift and Snowflake. For some of our specific use cases, Snowflake performed much better than Redshift, and given the nature of the usage, the costs with Snowflake were comparable to those with Redshift. Snowflake also has the added advantage of being cloud agnostic.
Docker Bench, Clair and Anchore are examples of tools that can be used for container scanning.
The choice of the tool would depend on the customer’s environment and the cloud provider being used.
One has to analyze the requirements in detail to make the best choice.
Some of the top services in the ML space include AWS SageMaker, AWS Textract and AWS Lex (for chatbots).
We have also seen solutions from the Azure stack being used, such as Azure Synapse, QnA Maker and Computer Vision.
While these are services from individual cloud providers, we have also built our own vendor-agnostic services for each of these areas, which help you port and migrate the solution onto any of these cloud providers should there be cost benefits in the future.
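For illustration, here is a minimal sketch of calling one such managed service (AWS Textract via boto3; the bucket and document name are placeholders) to pull text out of a scanned document:

    # Sketch: calling a managed ML service (Textract here) to extract text from
    # a scanned document stored in S3. Assumes boto3; the bucket and key are
    # placeholders.
    import boto3

    textract = boto3.client("textract")

    response = textract.detect_document_text(
        Document={"S3Object": {"Bucket": "example-docs", "Name": "invoice-001.png"}}
    )

    lines = [
        block["Text"]
        for block in response["Blocks"]
        if block["BlockType"] == "LINE"
    ]
    print("\n".join(lines))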
We follow a consultative approach in such scenarios. A two-week study of the current landscape tells us which approach will yield the most optimization. From there, depending on the size of the estate, it could take around another two weeks to frame a proposal for optimization.
For various reasons, many of our customers prefer a hybrid approach, whether due to compliance concerns or technical limitations. Establishing a secure connection back to the data centers (a direct connect, for example) is a very popular method of hybridization. We can split applications and run some workloads on the Cloud while keeping others on-prem. If the blocker is a technical limitation, we can also look at re-architecting; of course, we have to make sure such a move would benefit the organization.
We have implemented security capabilities at various levels, such as code analysis, compliance monitoring, vulnerability assessment and threat intelligence. We cover these areas using various cloud-native and vendor-agnostic tools (a small sketch of consuming findings follows this list):
  • Vulnerability Assessment – Qualys or AWS Inspector (for AWS Setup)
  • Open-source vulnerability assessment – Snyk or WhiteSource
  • Code Analysis – SonarQube
  • Threat Intelligence – AWS GuardDuty and others
  • Compliance Monitoring – AWS Inspector, GuardDuty, AWS Systems Manager or Azure Security Center
  • Threat Modeling – Microsoft Threat Modeling tool
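As a small sketch of consuming such findings, assuming boto3 and that GuardDuty is already enabled in the account, the snippet below lists high-severity GuardDuty findings so they can feed a compliance dashboard or a ticketing flow:

    # Sketch: pulling recent threat-intelligence findings from GuardDuty so they
    # can be fed into compliance dashboards or ticketing. Assumes boto3 and that
    # GuardDuty is already enabled in the account.
    import boto3

    guardduty = boto3.client("guardduty")

    for detector_id in guardduty.list_detectors()["DetectorIds"]:
        finding_ids = guardduty.list_findings(
            DetectorId=detector_id,
            FindingCriteria={"Criterion": {"severity": {"Gte": 7}}},  # high severity
        )["FindingIds"]

        if not finding_ids:
            continue

        findings = guardduty.get_findings(
            DetectorId=detector_id, FindingIds=finding_ids
        )["Findings"]
        for finding in findings:
            print(finding["Severity"], finding["Type"], finding["Title"])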
We might need to delve deeper into this particular use case to answer accurately. A few questions come to mind: can we compress the data? If not, can we logically split it into smaller chunks?
GCP's Storage Transfer Service is the recommended method, but if there are any reservations about it, we can consider gsutil.
Given the large volume of data, it is best to benchmark the cost impact of the transfer before opting for migration. The choice also depends on factors such as the age of the data and how often it is accessed.
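If the managed service is not an option, a sketch of the gsutil route (paths and bucket names are placeholders, and the data is assumed to be compressible) might look like this, compressing first and parallelizing the upload:

    # Sketch: falling back to gsutil for a bulk transfer, compressing the data
    # first to cut transfer volume and using parallel composite uploads. Paths
    # and bucket names are placeholders; for very large estates the managed
    # Storage Transfer Service is usually the better fit.
    import subprocess

    SOURCE_DIR = "/data/exports"                 # hypothetical local source
    DEST_BUCKET = "gs://example-archive-bucket"  # hypothetical destination

    # Compress the source first (assumes the data is compressible).
    subprocess.run(["tar", "-czf", "/tmp/exports.tar.gz", SOURCE_DIR], check=True)

    # -m parallelises the copy; -o lowers the threshold for composite uploads.
    subprocess.run(
        ["gsutil", "-m",
         "-o", "GSUtil:parallel_composite_upload_threshold=150M",
         "cp", "/tmp/exports.tar.gz", DEST_BUCKET],
        check=True,
    )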
We have created our own tools to simulate data for testing. Our tool, the Test Data Hub, mimics production data by generating records that maintain a structure similar to that of the actual data; these can then be used for tests and simulations.
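The snippet below is only a generic illustration of the idea, not the Test Data Hub itself: it generates synthetic records that follow a made-up schema and realistic value ranges without copying any real data:

    # Sketch of the general idea (not the Test Data Hub itself): generate
    # synthetic records that follow the same structure and value ranges as
    # production data, without copying any real values. The schema is made up.
    import csv
    import random
    import string
    from datetime import date, timedelta

    def random_account_id() -> str:
        return "AC" + "".join(random.choices(string.digits, k=8))

    def random_record() -> dict:
        return {
            "account_id": random_account_id(),
            "txn_date": (date(2023, 1, 1) + timedelta(days=random.randint(0, 364))).isoformat(),
            "amount": round(random.uniform(5.0, 2500.0), 2),
            "currency": random.choice(["USD", "EUR", "INR"]),
        }

    with open("synthetic_transactions.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["account_id", "txn_date", "amount", "currency"])
        writer.writeheader()
        writer.writerows(random_record() for _ in range(1000))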
