Migrating Core Banking to AWS: The Standard Chartered Experience
AWS re:Invent 2021 - Standard Chartered Bank: Migrating core banking to AWS
Summary
In this engaging session, AWS Solutions Architect Xavier Loup and Mitra Heravizadeh from Standard Chartered Bank detail the ambitious migration of the bank's core banking system, Atlas, to AWS. This strategic endeavour is part of Standard Chartered's broader cloud-first strategy to enhance services like virtual banking and open banking. The migration, focusing on modernizing the system's architecture and leveraging AWS's robust capabilities, aims to improve performance, scalability, and resiliency. Despite challenges such as regulatory compliance and legacy system integration, the project highlights significant benefits and learnings, including enhanced performance and reduced downtime through innovative cloud solutions.
Highlights
Standard Chartered Bank's core banking system, Atlas, transitioned to AWS, marking a significant shift in their cloud strategy 🌐.
Mitra Heravizadeh emphasized the creation of cloud muscle through this complex migration, aiming for future tech-ready financial services 💪.
The migration involved significant modernization efforts such as adopting event-driven architecture and APIs to enhance system flexibility and service offerings 🔄.
AWS's robust solutions like auto scaling groups and Aurora PostgreSQL were pivotal in achieving the required system performance and reliability 🎯.
A key challenge was managing varied regulatory requirements across different countries, an ongoing effort enhanced by AWS's global infrastructure 🌍.
Key Takeaways
Standard Chartered Bank embarked on a bold cloud-first strategy, aiming to migrate their complex core banking system, Atlas, to AWS for enhanced agility and innovation 🚀.
The bank's multi-cloud approach ensures resilience, flexibility, and compliance across global operations, supported by strategic partners including AWS 🌏.
Through the use of AWS's advanced features like auto scaling and Aurora PostgreSQL, the Atlas system achieved significant performance improvements and scalability 📈.
Complex regulatory requirements and legacy system integrations were addressed through strategic planning and AWS partnerships, highlighting the importance of regulatory compliance in cloud transitions ⚖️.
The implementation of DevOps practices and infrastructure as code significantly accelerated environment creation, showcasing the power of automation and modern IT practices in cloud migrations 💻.
Overview
Standard Chartered Bank is making bold moves to revolutionize their banking services through cloud technology, as revealed in the AWS re:Invent 2021 talk. The bank's cloud-first approach, as shared by Mitra Heravizadeh, positions Standard Chartered at the forefront of financial innovation, targeting advancements in services like virtual banking and open banking. Their migration of Atlas, the core banking platform, to AWS, serves as a cornerstone of this strategy, showcasing their commitment to staying competitive in a rapidly evolving market.
The session highlighted key modernization efforts undertaken by Standard Chartered, including moving from legacy systems to a more agile cloud infrastructure. By leveraging AWS's capabilities such as auto scaling and Aurora PostgreSQL, Atlas achieved unprecedented scalability and performance improvements. This transition required meticulous planning and significant technological shifts, particularly in adopting DevOps practices and automating deployments, thus achieving faster and more efficient operational capabilities.
Standard Chartered's transition of Atlas to AWS wasn't just about technological upgrade; it also involved navigating complex regulatory landscapes. The bank successfully managed compliance across various jurisdictions with AWS's extensive global presence. This strategic move not only promises improved service for clients but also sets a precedent for resilience and innovation in the banking industry's cloud adoption journey. The collaboration between AWS and Standard Chartered has been pivotal in achieving this transformation, marking a new era for the bank's digital capabilities.
AWS re:Invent 2021 - Standard Chartered Bank: Migrating core banking to AWS (Transcription)
00:00 - 00:30 (upbeat music) Thanks for joining us. I'm Xavier Loup. I'm a solutions architect at AWS. I will be co-presenting this session with Mitra Heravizadeh, global head architecture and cloud at Standard Chartered Bank. In this session, we will present the migration
00:30 - 01:00 of Standard Chartered's core banking system to AWS. Here's the agenda of the session. Mitra will first present Standard Chartered Bank and its cloud strategy. She will then describe Atlas, its core banking system, and you will learn about Atlas program requirements, migration, delivery, and outcomes.
01:00 - 01:30 For the second half of the session, I will present a deeper dive into the Atlas technical solution. I will now let Mitra start this presentation. Thanks for joining us. I have the pleasure of co-presenting with Xavier, our AWS partner, on the cloud journey of our core banking platform at Standard Chartered Bank. Over the next 20 minutes or so,
01:30 - 02:00 I'll share with you a bit of background on our bank, Standard Chartered; our cloud strategy; the cloud migration journey of our core banking platform; and, of course, a quick overview of the outcomes achieved and key lessons learned. Let's start with some background on Standard Chartered. SCB is a global bank that operates in 59 countries across retail,
02:00 - 02:30 corporate, and institutional banking, and also wealth management. SCB has been operating for over 150 years. Turning to our cloud journey: cloud is a cornerstone of our strategy to meet the present and future banking needs of our customers. Adopting a cloud-first approach makes our vision for next-generation financial services,
02:30 - 03:00 like virtual banking, next-generation payments, open banking, and banking as a service, a reality. As the disruption in the financial industry continues, we can focus on client benefits while deploying our solutions more quickly, and integrating new business models and partners faster and more intuitively.
03:00 - 03:30 Our plan is to move most of our applications and data, which currently reside in our on-premises physical data centers, to the cloud, of course subject to regulatory approvals. Over the next five years, we expect more than half of our compute workloads for significant applications, including our core banking and trading systems and new digital ventures,
03:30 - 04:00 such as virtual banking and banking as a service, to be cloud based. We have adopted a multi-cloud strategy, which gives us maximum resiliency, flexibility, and security across our global footprint, of course supported and enabled by our strategic partners in this space, including AWS. We have developed, and continue to develop,
04:00 - 04:30 a number of cloud-based next-generation financial service offerings, such as Mox, our digital bank, and Nexus, our banking-as-a-service platform. In these cases, the cloud has enabled us to deliver innovative product offerings to our customers. However, the more challenging problem for us was how to migrate our existing, including legacy,
04:30 - 05:00 on-prem applications to the cloud so as to leverage the benefits that cloud offers. The key aspect of our approach was that, rather than starting with migrating all the low-hanging fruit, which of course we also did, we set ourselves a more challenging objective by selecting our core banking platform, Atlas, as an early cloud migration target.
05:00 - 05:30 I will share with you today how we went about this and key lessons learned. Atlas is Standard Chartered's core banking platform supporting multiple business segments. It accommodates retail and wholesale customers at the same time, and covers the common denominator of all client groups. It's an in-house developed banking system.
05:30 - 06:00 It holds the customers' transaction and static data, as well as product information; as such, it is the system of record for all products, regulatory reports, and customer information. It is a key transaction-processing engine requiring high throughput and, of course, 24/7 availability. It supports our critical banking operations
06:00 - 06:30 and provides the foundation for important upstream processes. It's a complex ecosystem, as you can see, with over a hundred integration points with downstream and upstream applications. Given all this, Atlas does not appear to be a good candidate for early cloud adoption. So why did we start with a very complex application so early in our cloud adoption journey?
06:30 - 07:00 Good question. Part of the motivation, of course, was that if we can move core banking to the cloud, we can do anything. The lessons learned through addressing the challenges of a complex, highly integrated platform would set us up, as an organization, to address the migration of other applications destined for the cloud. In short, it would help us develop cloud muscle. Of course, this was not the sole factor in our decision.
07:00 - 07:30 It was also supported by other considerations, such as expected cost benefits and improved resiliency and agility, some of which I will address in just a few moments. Before we cover these benefits, a bit of history on our core banking transformation. It commenced in the early 2000s, when we took a number of heterogeneous banking systems
07:30 - 08:00 and developed a solution that consolidated all of our product variations and capabilities into a single platform. The aim was to have this single code base, serving retail and institutional needs, deployed in multiple markets. As a result of this initiative, we built a strong, capable core banking team. This provided us with the necessary
08:00 - 08:30 collective skills and experience to successfully execute a complex core banking migration to the cloud. Before we embarked on the journey of migrating Atlas to the cloud, we did undertake a buy-versus-build assessment. A number of buy options were considered: was there a good fit from an IT risk and market standards standpoint?
08:30 - 09:00 But eventually, we discounted all those options, due to the significantly higher cost of integration with the existing ecosystem, and the customization required to support the equivalent functionality available within the existing banking platform. I can say with pride that Atlas is continuously evolving to compete with top-class market solutions.
09:00 - 09:30 Harmonization and modernization have been at the heart of Atlas delivery. Having a single code base that is deployed in all markets provides a consistent experience for global customers, whilst providing flexibility for local customization through configuration.
09:30 - 10:00 Atlas, prior to cloud migration, had already delivered substantial benefits, such as a significant uplift and modernization of the technical stack, which enabled smoother integration with digital channels, uplifting our customer experience. Scalability and throughput have been significantly improved, as well as availability, which of course is a key requirement
10:00 - 10:30 for a core banking system. These improvements have enabled us to roll out products across markets in shorter timeframes with significant cost efficiency. Focusing back on cloud migration, the AWS solution had to meet several requirements to enable a successful migration. To name a few: performance, availability, monitoring, security, resiliency,
10:30 - 11:00 and of course regulatory requirements. We started from the position that, in migrating Atlas to AWS, the key nonfunctional outcomes had to be maintained or improved upon. AWS was very carefully evaluated against these requirements.
11:00 - 11:30 And Xavier, of course, will cover in detail shortly how these are addressed on AWS. I just want to call out the regulatory compliance challenge we had to overcome: we had to engage 50 different regulatory bodies, each with a varying level of acceptance for deploying material workloads into the cloud. This is an ongoing and important engagement for us.
11:30 - 12:00 Let's zoom in on some key aspects of modernization. We undertook a two-step approach to help us accelerate our cloud migration journey. Our existing stack required some modernization before it was ready for cloud deployment. It is important to remember that, while we had aspirations for cloud,
12:00 - 12:30 we needed to continue to support our on-prem deployments in those markets where the regulatory bodies have not allowed cloud deployments. Continuing with a single code base that can support both deployment models was a key principle. It allowed us, and still allows us, to optimize the required skills and reduce both change and run costs, whilst ensuring that we can deliver
12:30 - 13:00 new functionality faster, of course. Let's touch on a few modernization aspects. We introduced an event-driven architecture for Atlas, which helped us to decouple the various components of the platform and gave us greater flexibility with deployment options. Adoption of APIs was another dimension, which enabled a richer set of services
13:00 - 13:30 to be made available over various digital channels. In addition, the Atlas team underwent a transformation in how they built and deployed their solution through the adoption of DevOps practices, including the transition to a more automated build and deployment capability. This enabled more frequent iteration and a shortened delivery cycle.
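The talk doesn't specify which eventing technology Atlas uses, so purely as an illustration of the decoupling idea behind an event-driven architecture, here is a minimal in-process sketch (all event names and handlers are hypothetical):

```python
# Minimal illustration of event-driven decoupling (hypothetical; the talk
# does not describe Atlas's actual eventing technology or event names).
from collections import defaultdict
from typing import Callable

class EventBus:
    """Tiny publish/subscribe bus: producers emit events, consumers react."""
    def __init__(self) -> None:
        self._handlers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, event_type: str, handler: Callable[[dict], None]) -> None:
        self._handlers[event_type].append(handler)

    def publish(self, event_type: str, payload: dict) -> None:
        for handler in self._handlers[event_type]:
            handler(payload)  # each subscriber reacts independently

bus = EventBus()
# A downstream component (e.g. notifications) subscribes without the core
# posting engine knowing anything about it.
bus.subscribe("transaction.posted", lambda e: print(f"notify: {e['account']}"))
bus.publish("transaction.posted", {"account": "12345", "amount": 100.0})
```

The point of the pattern is that the producer and its consumers can now be deployed, scaled, and changed independently, which is the deployment flexibility described above.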
13:30 - 14:00 The key takeaway here is that the adoption of a two-step approach towards migration, spending the effort upfront to optimize the technical stack, positioned us well in preparing the application for cloud deployment. How did our execution approach allow us to be successful
14:00 - 14:30 in our cloud migration journey? A good question. We took an execution approach involving a progressive delivery model that allowed us to undertake the modernization of the application whilst also continuing to provide ongoing feature enhancements. The coexistence of new technology components with the old enabled us to support gradual and staged rollouts
14:30 - 15:00 in support of the multiple implementations in the various markets. We also standardized various aspects of the solution to remove some customizations that had been introduced to cater for market-specific features. Additionally, we leveraged some tooling to assist with aspects of the modernization efforts.
15:00 - 15:30 These approaches allowed us to modernize the existing stack and prepare for the cloud migration while still continuing to enhance the core banking platform and deploy upgrades to the various markets. Here we can see a quick snapshot of our roadmap since 2019. In 2019, we undertook the modernization work.
15:30 - 16:00 On completion in 2020, we rolled out Atlas in 14 markets: 12 of these were on-prem deployments, plus our first cloud deployment, which supported two markets. By the end of '21, an additional seven markets will be served by cloud deployments of Atlas. In 2022 and 2023, we will continue to deploy
16:00 - 16:30 both cloud and on-prem instances to support 40 markets. What have we learned so far? There have been many lessons learned as we have progressed on this journey, for sure. I will share with you only the top three. The first: how we designed for speed and performance. We considered the impact of network latencies
16:30 - 17:00 when choosing AWS Regions, catering for proximity to our customers as well as proximity to our on-prem data centers, where other interfacing systems are hosted. The second: ensuring high resiliency and availability. We leveraged, and are leveraging, AWS capabilities to design for resiliency.
17:00 - 17:30 Deploying our application across multiple Availability Zones guarantees no disruption to our operations, even in the event of a single Availability Zone failure. And of course, the last one: we leveraged AWS expertise to reduce the time to build our first cloud application, and we also used the tools provided by AWS.
17:30 - 18:00 This allowed us to accelerate our delivery and avoid preventable pitfalls. Our cloud migration efforts will continue, and the learnings from the Atlas experience will allow us to continue to thrive on our cloud journey. With that, I will now hand over to Xavier to discuss the technical aspects of the Atlas solution.
18:00 - 18:30 Thank you. Let's talk about performance. The key requirement was to be able to sustain up to 4,000 transactions per second, 10 times more than the previous on-premises system. The other requirement was to be able to scale depending on the load. So what did we add to the architecture to achieve this?
18:30 - 19:00 The first priority was to add horizontal scalability using AWS Auto Scaling groups. An Auto Scaling group makes it possible to adjust the number of instances depending on the load. This was simple to implement because Atlas had been modified to become fully stateless. Application Load Balancers were deployed in front of the two Auto Scaling groups.
19:00 - 19:30 They distribute the network traffic to the different instances. A cache-aside pattern was implemented for reference data, using an in-memory cache directly integrated inside the application. This reduces latency for reference-data queries and also reduces the load on the Aurora PostgreSQL database. The last component used to improve performance is Aurora PostgreSQL read replicas.
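To make the cache-aside idea concrete, here is a minimal sketch, with hypothetical table and connection details (Atlas's actual cache implementation is not described in the talk): the application checks an in-memory cache first and falls back to the database only on a miss.

```python
# Cache-aside sketch for reference data (hypothetical names; assumes
# psycopg2 and a PostgreSQL table called currency_rates).
import time
import psycopg2

CACHE: dict[str, tuple[float, object]] = {}  # key -> (expiry_epoch, value)
TTL_SECONDS = 300  # reference data changes rarely, so a short TTL suffices

conn = psycopg2.connect("dbname=atlas host=aurora-endpoint user=app")

def get_currency_rate(code: str):
    entry = CACHE.get(code)
    if entry and entry[0] > time.time():
        return entry[1]                      # cache hit: no database call
    with conn.cursor() as cur:               # cache miss: read from the DB
        cur.execute("SELECT rate FROM currency_rates WHERE code = %s", (code,))
        row = cur.fetchone()
    value = row[0] if row else None
    CACHE[code] = (time.time() + TTL_SECONDS, value)  # populate the cache
    return value
```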
19:30 - 20:00 With these changes, Atlas was able to deliver the required scalability and performance. We will now dive a little deeper into the implementation of these read replicas. Aurora read replicas reduce the load on the primary database instance by offloading read-only queries. Aurora supports up to 15 readers,
20:00 - 20:30 and they share the same underlying storage as the primary instance. This helps lower costs and reduce replica lag. For the Atlas workload, SCB is now using two read replicas. Depending on the use case, requests are sent either to the primary node or to the two read replicas. OLTP requests are read-and-write transactions.
20:30 - 21:00 They are directed to the primary node. OLAP workloads, such as operational reporting, inquiries, and data downloads, are read-only, so they can be directed to the read replicas. This classification of the queries was done early in the project, when the SQL code was adapted to PostgreSQL. The use of read replicas helped Atlas increase the performance of its database.
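One common way to implement this split, sketched below with hypothetical endpoint and table names, is to use Aurora's two cluster endpoints: the writer endpoint always points at the primary, while the reader endpoint load-balances across the read replicas. The talk does not show Atlas's actual routing code.

```python
# Sketch of OLTP/OLAP routing via Aurora cluster endpoints (hypothetical
# endpoints and schema; assumes psycopg2).
import psycopg2

WRITER = "atlas.cluster-abc123.eu-west-1.rds.amazonaws.com"    # primary
READER = "atlas.cluster-ro-abc123.eu-west-1.rds.amazonaws.com" # replicas

def connect(endpoint: str):
    return psycopg2.connect(host=endpoint, dbname="atlas", user="app")

def post_transaction(account: str, amount: float) -> None:
    """OLTP: reads and writes must go to the primary node."""
    with connect(WRITER) as conn, conn.cursor() as cur:
        cur.execute(
            "UPDATE accounts SET balance = balance + %s WHERE id = %s",
            (amount, account),
        )

def account_statement(account: str):
    """OLAP-style read-only inquiry: safe to offload to a read replica."""
    with connect(READER) as conn, conn.cursor() as cur:
        cur.execute("SELECT * FROM postings WHERE account_id = %s", (account,))
        return cur.fetchall()
```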
21:00 - 21:30 The use of read replicas also removed the risk of OLTP transactions failing due to large concurrent OLAP requests. Let's talk about reliability. As presented earlier by Mitra, Atlas requires a minimum availability of 99.99%. This means less than one hour of downtime per year (about 53 minutes). Another key requirement is to ensure an RPO of zero,
21:30 - 22:00 which means no data loss if an instance or an Availability Zone becomes unavailable. How does the Atlas architecture deliver these requirements? Mostly by leveraging three AWS Availability Zones. To achieve an RPO of zero, all data is copied synchronously across multiple AZs.
22:00 - 22:30 This replication is directly managed by Aurora, S3, and EFS. No data is lost if an instance or an AZ becomes unavailable. To minimize the RTO, all the Atlas services are deployed across three AZs. Front-end application instances are deployed multiple times in each AZ. Aurora, EFS, and S3 are also available across the three AZs.
22:30 - 23:00 The Auto Scaling groups balance the target capacity between the Availability Zones. Enough instances are provisioned in each Availability Zone to continue handling the load if one AZ is lost. The Application Load Balancers ensure that requests are sent only to healthy instances.
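As a rough sketch of this multi-AZ setup (all names, ARNs, and sizes below are hypothetical, not SCB's actual configuration): an Auto Scaling group given subnets in three AZs spreads its instances across them, and ELB health checks replace instances the load balancer marks unhealthy.

```python
# Sketch: a multi-AZ Auto Scaling group behind an ALB, using boto3
# (hypothetical names and ARNs).
import boto3

autoscaling = boto3.client("autoscaling", region_name="eu-west-1")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="atlas-app",
    LaunchTemplate={"LaunchTemplateName": "atlas-app-template", "Version": "$Latest"},
    MinSize=6,                # e.g. at least 2 instances per AZ
    MaxSize=18,
    DesiredCapacity=9,
    # One subnet per AZ: the group spreads instances across all three,
    # so losing one AZ still leaves enough capacity to carry the load.
    VPCZoneIdentifier="subnet-az1,subnet-az2,subnet-az3",
    TargetGroupARNs=["arn:aws:elasticloadbalancing:...:targetgroup/atlas/abc"],
    HealthCheckType="ELB",    # replace instances the ALB marks unhealthy
    HealthCheckGracePeriod=300,
)
```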
23:00 - 23:30 And for Aurora, if the primary database becomes unavailable, there is an automatic failover to one of the read replicas. These failovers typically complete within 30 seconds. On top of this, Atlas has implemented an advanced log-replay mechanism: a copy of all transactions is sent to EFS in a binary format, and kept for 24 hours. If a database becomes corrupted,
23:30 - 24:00 it is possible to restore an earlier version using the point-in-time recovery feature of Aurora. The Atlas team can then replay the transaction logs. All transactions are idempotent, which means that they can be safely replayed multiple times. One last thing: the Direct Connect link to on-premises is also redundant. SCB uses two different network providers
24:00 - 24:30 to avoid having a single point of failure. With all these changes, Atlas delivers on the reliability requirements: the RPO is zero and the RTO is minutes. We will now dive a little deeper into how Aurora storage helps increase Atlas's reliability.
24:30 - 25:00 Aurora has log-structured, distributed storage. Conceptually, the storage engine is a distributed SAN that spans multiple AZs. Aurora builds its storage volume from 10-gigabyte logical blocks called protection groups. The data in each protection group is replicated on six storage nodes across three AZs. A write is considered successful if at least four
25:00 - 25:30 of the six storage nodes acknowledge receipt. This architecture makes Amazon Aurora storage fault tolerant. Aurora transparently handles the loss of up to two copies of data without affecting database write availability, and it can lose up to three copies without affecting read availability. On top of that, Aurora storage has implemented
25:30 - 26:00 self-healing mechanisms: data blocks and disks are continuously scanned for errors and replaced automatically. Reliability is one of the key reasons Aurora was chosen by SCB for its core banking system. As we have seen, the Atlas architecture
26:00 - 26:30 makes it possible to have disaster recovery between different Availability Zones of a single Region. SCB also wanted to have disaster recovery between Regions. In case of a catastrophic event impacting a full Region, this would make it possible to restart Atlas in a distant AWS Region. For example, a deployment in Dublin can be restarted in Frankfurt.
26:30 - 27:00 For cross-Region disaster recovery, the requirements are different: the maximum RPO is 15 minutes, and the maximum RTO is 24 hours. SCB decided to implement the pilot-light strategy. In this mode, the data is live in the secondary Region, but the rest of the services are down. When a disaster occurs,
27:00 - 27:30 the remaining part of the infrastructure is deployed. This makes it possible to reduce the RPO while optimizing costs. So what can we do to optimize the RPO? Is it possible to have synchronous replication between Regions to achieve an RPO of zero? Of course not. The latency between Regions is too high due to the distance;
27:30 - 28:00 the speed of light is the limit. Can we implement a simple snapshot-and-restore mechanism? Yes, this is possible given the requirements. However, we can significantly reduce the RPO using AWS native replication services. Atlas chose the following components to implement asynchronous replication.
28:00 - 28:30 For Amazon Aurora, they use Aurora Global Database. For Amazon EFS, they use AWS DataSync. And for Amazon S3, they use S3 Cross-Region Replication. With these native replication services, Atlas will be able to have a cross-Region RPO of minutes for S3 and EFS, and of seconds for Aurora.
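Since the achievable RPO tracks the replication lag, one way to keep an eye on it (sketched here with a hypothetical cluster name) is the AuroraGlobalDBReplicationLag metric that Aurora Global Database publishes to CloudWatch, in milliseconds, on the secondary cluster:

```python
# Sketch: watching Aurora Global Database replication lag via CloudWatch
# (the cluster identifier is hypothetical).
import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch", region_name="eu-central-1")  # secondary Region

resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/RDS",
    MetricName="AuroraGlobalDBReplicationLag",
    Dimensions=[{"Name": "DBClusterIdentifier", "Value": "atlas-secondary"}],
    StartTime=datetime.utcnow() - timedelta(minutes=15),
    EndTime=datetime.utcnow(),
    Period=60,
    Statistics=["Maximum"],
)

# Each datapoint's maximum lag is an upper bound on the data at risk (RPO).
for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(f"{point['Timestamp']:%H:%M} lag={point['Maximum']:.0f} ms")
```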
28:30 - 29:00 This cross-Region disaster recovery architecture is currently being tested by Atlas and will be deployed to production soon. We will now do a deeper dive on cross-Region replication with Aurora Global Database. I will show you how this managed service works under the hood.
29:00 - 29:30 So these are the different steps of the replication flow. First, the primary instance sends log records in parallel to the storage nodes, the replica instances, and the replication server. Second, the replication server streams the log records to the replication agent in the secondary Region. Third, the replication agent sends the log records in parallel to the storage nodes and replica instances.
29:30 - 30:00 In case of an outage, the replication server can also pull log records from the storage nodes in order to catch up. So what are the benefits of this architecture for Atlas? First, it provides high throughput, up to 150,000 writes per second.
30:00 - 30:30 Second, it ensures a low replica lag: less than one second cross-Region. Finally, it enables fast recovery: less than one minute of downtime after a Region becomes unavailable. Aurora Global Database is a key enabler for the cross-Region disaster recovery of Atlas. As Atlas stores very sensitive transaction data,
30:30 - 31:00 security is of course highly critical. The application needs to be compliant with the regulations of the different countries it is offered in. It also needs to be aligned with internal SCB standards. To ensure security and compliance, SCB leverages more than 18 AWS services. I will highlight some of them.
31:00 - 31:30 For security, Atlas uses AWS KMS to encrypt all data. They use a bring-your-own-key configuration to keep full control of their encryption master keys. For auditability, SCB uses AWS CloudTrail to log all account activities. The activation of AWS CloudTrail is managed by the group landing zone. For compliance, SCB leverages AWS Config
31:30 - 32:00 to ensure that the resources deployed are compliant with the rules defined at group level. A non-compliant resource would generate an alert and can even be automatically blocked. Of course, SCB also uses other security components on top of the ones provided by AWS. For example, they use Splunk, an AWS partner, to detect security threats.
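The talk does not list SCB's actual group-level rules; as one hedged illustration of how such a rule is deployed, here is the AWS-managed Config rule RDS_STORAGE_ENCRYPTED, which flags any RDS resource whose storage is unencrypted:

```python
# Sketch: deploying an AWS Config managed rule with boto3 (the rule choice
# is illustrative, not SCB's actual policy set).
import boto3

config = boto3.client("config", region_name="eu-west-1")

config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "rds-storage-encrypted",
        "Description": "All RDS storage must be encrypted (group-level policy).",
        "Source": {
            "Owner": "AWS",                        # AWS-managed rule
            "SourceIdentifier": "RDS_STORAGE_ENCRYPTED",
        },
        "Scope": {"ComplianceResourceTypes": ["AWS::RDS::DBInstance"]},
    }
)

# Non-compliant resources then show up in the compliance report:
for result in config.describe_compliance_by_config_rule(
    ConfigRuleNames=["rds-storage-encrypted"]
)["ComplianceByConfigRules"]:
    print(result["ConfigRuleName"], result["Compliance"]["ComplianceType"])
```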
32:00 - 32:30 All these services help SCB ensure compliance with internal policies and regulatory standards. As already mentioned by Mitra, Atlas has to be compliant with the regulations of the different countries SCB operates in. In particular, multiple countries mandate that financial data
32:30 - 33:00 is stored in the country of the customer. With 25 launched Regions and eight more announced, the AWS Cloud is a key enabler to ensure compliance with data-location regulations. As of now, Atlas is already deployed in two different AWS Regions. The goal is to expand to a minimum of six AWS Regions for the next deployments.
33:00 - 33:30 I would like to mention that one of the impacts of this global deployment was increased latency for calls to on-premises dependencies. The Atlas team worked to optimize and parallelize these requests. This reduced the impact of latency for Regions far away from SCB's on-premises data centers.
33:30 - 34:00 For SCB, being able to monitor and audit its database activity is a strong requirement for all critical workloads. When SCB started using AWS, Amazon RDS was not offering a security monitoring feature, so they were not able to use RDS for critical workloads. This need was taken into account by the AWS service teams.
34:00 - 34:30 In 2020, Database Activity Streams, DAS, became available on Amazon Aurora, and SCB was able to validate Aurora for Atlas. What exactly is DAS? DAS is a feature of Aurora that provides a near real-time stream of the activity in the database cluster. You can see here an example of a SELECT event in DAS.
34:30 - 35:00 The generated document is very detailed: user, session, IP address, SQL request, number of rows written. All this information can be used for auditability and security monitoring. SCB then uses Splunk to analyze these DAS events and detect potential threats.
35:00 - 35:30 I will now describe how Database Activity Streams are configured at SCB. First, DAS is enabled on Aurora for all critical workloads. This is activated by a DAS administrator, who is not the DBA. With DAS, there is a strict separation of duties: DBAs don't have access to the collection, transmission,
35:30 - 36:00 storage, and processing of the streams. Once DAS is enabled, it sends all activity events to Amazon Kinesis Data Streams. This is done with minimal CPU overhead. SCB even uses Kinesis Data Analytics to filter the activity stream. Only the relevant security events are kept.
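SCB's actual filter runs in Kinesis Data Analytics and is not shown in the talk; purely as an illustration of the kind of filtering involved, here is a Python sketch over decrypted DAS activity events. The field names (`command`, `dbUserName`, `class`) follow the DAS event format, but treat the exact schema and filter criteria as assumptions.

```python
# Hypothetical filter over decrypted DAS activity events: keep only records
# that matter for security monitoring and drop routine reads.
SECURITY_RELEVANT_COMMANDS = {"INSERT", "UPDATE", "DELETE", "DDL", "AUTH"}

def is_security_relevant(event: dict) -> bool:
    if event.get("class") == "ERROR":          # failed statements / logins
        return True
    if event.get("command") in SECURITY_RELEVANT_COMMANDS:
        return True                            # data or schema changes
    return False                               # routine SELECTs are dropped

events = [
    {"command": "SELECT", "dbUserName": "app", "class": "READ"},
    {"command": "DELETE", "dbUserName": "app", "class": "WRITE"},
]
forwarded = [e for e in events if is_security_relevant(e)]
print(forwarded)  # only the DELETE would be forwarded downstream
```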
36:00 - 36:30 Finally, Kinesis Data Firehose sends the events to Splunk in near real time. Splunk analyzes this data to detect security threats. This solution makes it possible to detect security threats at the database level. As I mentioned earlier, this was a strong requirement for SCB, and we know DAS is an important enabler
36:30 - 37:00 for many financial institutions. I would like to focus now on data migration. What are the requirements for the Atlas migration? First, it's a heterogeneous migration, from DB2 LUW to PostgreSQL. The amount of data to migrate can be up to 2.5 terabytes for some countries.
37:00 - 37:30 Finally, migration duration is key, as the move has to be done in the time slot approved by the regulator. I will now describe the different steps of the migration. At the beginning of the project, the team worked first on the schema conversion. Using the AWS Schema Conversion Tool, they were able to convert the DB2 schema
37:30 - 38:00 to a PostgreSQL schema in less than one week. After that, they worked on the modification of the application code. Atlas uses custom SQL queries, so they had to change these queries to take into account the new environment: PostgreSQL, JBoss, and the cache-aside layer. AWS SCT helped scan the Atlas source code
38:00 - 38:30 for embedded SQL statements and then convert them. This task was definitely more complex: it took a few months to modify the requests and tune the performance. The next tasks are repeated for each country deployment. One week in advance, historical data is migrated using AWS Database Migration Service, DMS.
38:30 - 39:00 This data can be up to 1.5 terabytes depending on the country, approximately 60% of the total, and it takes up to six hours to migrate. A row-level validation is done afterwards, using DMS, to avoid any errors. On the cutover date, the remaining data is migrated using AWS DMS.
39:00 - 39:30 This time the amount of data can be up to one terabyte, and the migration can take up to three hours. Once again, a row-level validation is done afterwards, using DMS, to avoid any errors. AWS DMS was a key enabler for the migration from DB2 to PostgreSQL.
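As a rough sketch of this step (all ARNs, identifiers, and schema names are hypothetical, not SCB's actual configuration), a DMS full-load task with validation enabled performs exactly this combination: bulk-load the tables, then compare rows between source and target.

```python
# Sketch: a DMS full-load task with row-level validation, via boto3.
# EnableValidation makes DMS compare rows between source and target
# after they are loaded.
import boto3, json

dms = boto3.client("dms", region_name="eu-west-1")

dms.create_replication_task(
    ReplicationTaskIdentifier="atlas-history-load",
    SourceEndpointArn="arn:aws:dms:eu-west-1:111122223333:endpoint:db2-src",
    TargetEndpointArn="arn:aws:dms:eu-west-1:111122223333:endpoint:pg-tgt",
    ReplicationInstanceArn="arn:aws:dms:eu-west-1:111122223333:rep:atlas",
    MigrationType="full-load",
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "all-atlas-tables",
            "object-locator": {"schema-name": "ATLAS", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
    ReplicationTaskSettings=json.dumps({
        "ValidationSettings": {"EnableValidation": True}  # row-level checks
    }),
)
```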
39:30 - 40:00 One of the critical success factors for Atlas was being able to accelerate the creation of new environments. The Atlas project decided to adopt a DevOps approach. This impacted the processes, the tools, and the culture. The first change was to introduce infrastructure as code: Atlas now uses Terraform to deploy all of its AWS infrastructure.
40:00 - 40:30 The second change was to automate 100% of the deployment using CI/CD pipelines. One of the important benefits of this automation is improved security: nobody has direct access to production environments, and all changes are stored in the code repository and can be audited. The third change was to adopt
40:30 - 41:00 a blue-green deployment model. With this strategy, environments become immutable. New versions of Atlas are deployed in a new environment, and when this new environment is up and validated, traffic is shifted to it. The first benefit is reduced downtime for Atlas; the other advantage is the ability to immediately roll back to the previous version if anything goes wrong.
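One common way to do the traffic shift (sketched below; the listener and target group ARNs are hypothetical, and the talk does not say Atlas uses exactly this mechanism) is weighted forwarding on the Application Load Balancer listener, which makes rollback a single call with the weights swapped:

```python
# Sketch: shifting ALB traffic from the "blue" environment to "green"
# with boto3 (hypothetical ARNs).
import boto3

elbv2 = boto3.client("elbv2", region_name="eu-west-1")

LISTENER = "arn:aws:elasticloadbalancing:...:listener/app/atlas/abc/def"
BLUE = "arn:aws:elasticloadbalancing:...:targetgroup/atlas-blue/111"
GREEN = "arn:aws:elasticloadbalancing:...:targetgroup/atlas-green/222"

def shift_traffic(green_weight: int) -> None:
    """Send green_weight% of requests to green, the rest to blue."""
    elbv2.modify_listener(
        ListenerArn=LISTENER,
        DefaultActions=[{
            "Type": "forward",
            "ForwardConfig": {
                "TargetGroups": [
                    {"TargetGroupArn": BLUE, "Weight": 100 - green_weight},
                    {"TargetGroupArn": GREEN, "Weight": green_weight},
                ]
            },
        }],
    )

shift_traffic(100)  # cut over fully; shift_traffic(0) rolls back
```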
41:00 - 41:30 The last change was a modification of roles and responsibilities. With DevOps, the team moved to a self-service approach. The project had to redefine its organization and its processes. I would like to highlight a specific metric which, in my opinion, shows the progress achieved.
41:30 - 42:00 Thanks to the changes described, the creation of a new Atlas environment was reduced from four weeks to a single day. As a quick recap, today's session covered why SCB decided to transform Atlas, the key business drivers behind it, the modernization strategy adopted, the reasons why SCB decided to move to the cloud,
42:00 - 42:30 and how the program is being delivered. AWS is a key partner for this transformation. For example, we helped SCB achieve industry-leading high availability and disaster recovery by implementing three Availability Zones in two Regions; data regulation compliance across dozens of countries; agility, from being able to create new environments
42:30 - 43:00 in a single day rather than four weeks; and finally, performance, being able to handle 10 times as many transactions per second. Our partnership with Standard Chartered has delivered real business results, and we look forward to working with the bank on the next challenge. We would like to thank you
43:00 - 43:30 for listening to this session. (upbeat music)