Develop scalable Web Apps with Azure Database for MySQL and MariaDB: Build 2018

We're ready to go. Thanks for coming. There's space in the front if you'd like to sit closer. Given that we don't have a ton of people, we are obviously still going to do the session, no question, but please feel free to ask questions. If something is boring and you want to talk about something else, please ask; we can make it a little more interactive, depending on what you've seen before. I'm a senior program manager on the Azure data team running the open source database offerings, both the MySQL service and the Postgres service, and we are in the process of adding MariaDB as well. Andrea is also on the team. We have a guest speaker, Neil, who will talk about his experience running on our service.

So, you have probably seen or heard that the service is now generally available. Who of you has used the service or is using it? Andrea is. Very nice, that's good to know. We reached general availability on March 20th, and the service has been backed by our SLA since then. The service is currently available in 24 regions across the world, so we are not yet covering the full Azure footprint, but we are ramping up and onboarding the service to more regions, with the goal of being available in all regions across the world.

So, when we introduced the open source services, what was our thinking, and why are we doing this? First of all, we run the community versions of MySQL, and we run them as a managed service. It's not an IaaS offering; we run the community versions for you. And given that we are Microsoft and Azure, everything we do is geared toward being enterprise ready. So we made sure to build elasticity into the services, covering a wide band of resources available to you, and to make these services highly available for you. The next pillar of being enterprise ready is being secure and compliant. If you think about banking or healthcare, security is obviously super important, and it's really difficult to run a service yourself that is compliant with all the different standards you have to meet; we provide this compliance for you. As you've seen on the previous slide, Azure itself is industry leading in terms of reach across the world with the regions that are available, and as we onboard to all of those regions, these services will be as well. And lastly, Microsoft has a huge portfolio of very powerful tools and services, and we're integrating these open source databases so that you can take advantage of the other tools that exist while using those open source databases.

So, let's double-click on a few of those. Why do we run the community version, and what do we mean by managed? Who here has set up MySQL, played with it, runs it? So all of you do this, in a VM presumably, or on premises? It takes installation. You need to do maintenance, configure it, and think about availability: set up replication, and you probably need a witness that observes the primary and the secondary. These things very quickly get very difficult. Then you get into things like, hey, I need to apply a security patch, either for the operating system or for MySQL itself. How do you do this? What do you do about downtime, right?
So, what a managed service provides is essentially taking all of these responsibilities away from you; we provide them out of the box. We obviously make sure that the operating systems are up to date and patched; they are not even exposed to you, so you wouldn't be able to patch them anyway. We keep the service on the latest patches, and as new minor versions come out, we apply them for you as well, typically about a month after release. We don't do major version upgrades for you, because those are potentially breaking in nature, so you would have to take a manual step to move to the next major version.

And then the second thing: why do we run the community edition of MySQL? For us, it was one of the key principles from the very start that you shouldn't have to change anything that you're doing today. If you are running on a MySQL community edition, wherever you come from, you probably have an ecosystem around it and a stack of applications that use it. Our goal was to provide a MySQL service that can run these databases one to one, simply in the cloud, in a managed fashion. And as part of the Azure platform, you'll be able to run the ecosystem that you have, either on premises, in the cloud, or on IaaS, alongside our service. Those are the key ideas behind what we introduced: a managed service running the community edition of MySQL. And as I mentioned earlier, MariaDB is not yet generally available; it will go into a public preview later this year.

Then, as I mentioned before, there's elastic scaling: the ability to change your resources. On the platform, we use the same mechanism that provides high availability to also let you scale your resources quickly. The idea behind this is: when you run in a VM and want to change your resources, you typically have to run a high availability setup, where you change the secondary first, fail over, change the primary, and fail back if you need to. If you run a single VM, you potentially take a multi-minute outage when you change the compute, simply because the VM needs to shut down and a new VM has to spin up, which takes time. Our service utilizes the high availability mechanism we've implemented, and I'll talk about it more later, to let you scale very fast, and I will show you. The advantage is that you can provision resources based on the demands of the database. A typical scenario: you know that your workload is busy during the day, during working hours, but close to idle at night and on weekends. So we allow you to scale up the database in the morning, let it run throughout the day at a higher configuration, and scale back down in the evening, and this saves you cost in the whole process. Obviously this works better for predictable bursts than unpredictable ones, but later on, in a quick demo, I'll also show you how you can do this reactively, based on demand that is unpredictable.

Let's have a quick look at what this can look like with scaling. You can see on the screen I have a MySQL database set up in our service, and on the top left is a workload that runs against this database. This workload does huge updates on the system: multi-megabyte updates constantly written against a table. You can see the transaction rate; we get about 0.7 transactions per second, right?
And the average transaction time is about 35 seconds. This workload has been running since last night, so that's pretty much steady state. Now, what I'm going to do is, I should also refresh this here. Has it changed? No, it's still running; it changes really slowly. So what I'm going to do now is scale the storage of the MySQL database. The way it works in our service, the IO you get increases with the amount of storage you provision; there's a ratio of three to one, so if you scale up the storage, you get more IO performance.

So, az mysql server list: let me just list the server we have in the resource group, build-scale-demo. You first see the configuration of the server as it is now, two cores, and then you see the storage in megabytes. I hope you can actually see it; that is right here: 5120 megabytes, which is five gigabytes, the smallest we allow you to provision. What I want to do is update this MySQL server. If you're familiar with the CLI, you need to give it the resource group that it's in, build-scale-demo; you need to tell the command which server you want to operate on, which is mysql-build-scale-demo, I believe; we'll see. And then what I want to change is the storage size. Now, obviously, I forgot what two terabytes is in megabytes, which is the max, so let me just check. One second. Does anybody know? No? "Two terabytes" is not precise enough. So let's edit, okay, paste the command back in: 2097152. That's two terabytes in megabytes, and hopefully when I issue the command, it will start. Which one is it complaining about?

[Inaudible]

>>The command is running, and if everything goes well, you will see in the workload, hopefully relatively instantaneously, that the average execution time goes down and the transactions per second go up.

>>This is while the service is running?

>>Yes, while the service is running. Obviously it's a demo, so it's not working quite as I had intended. Now you see it. You can actually see it; the screen freezes when I zoom in, I didn't know that. On the top left here, the number was 0.7, and now it's 3.0-something, and I'll zoom in in a second. You can see that the average transaction time is way down: it is now six seconds, and we get about four of those transactions per second through the database. That's a dramatic increase compared to 0.7 and the long execution times before. The other thing you can see: while the command hasn't returned yet, because it's an asynchronous request that gets put into a service and then executed, the actual operation was more or less instantaneous on the database, and there were zero connection drops. So for scaling storage, it's a true online operation, and there's no interruption to the service. All right, any questions on this demo? Very good. So this is an example of what we mean by elastic scaling: being able to react to changes in demand more or less on the fly.
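For reference, the CLI commands used in this storage-scaling demo look roughly like the following; this is a sketch using the demo's resource group and server names, which may differ slightly from the exact ones on screen.

```bash
# List the servers in the resource group to check the current
# compute and storage configuration.
az mysql server list --resource-group build-scale-demo

# Scale storage up to 2 TB (2097152 MB). Storage scaling is online:
# no connections are dropped. Note that storage can only grow, never shrink.
az mysql server update \
  --resource-group build-scale-demo \
  --name mysql-build-scale-demo \
  --storage-size 2097152
```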
>>So are you going to talk more about [inaudible]

>>I will, yes. We already briefly mentioned secure and compliant; obviously a key thing we do is make the service secure and compliant. If you're familiar with SQL Database, there's a long list of compliance certifications that we have, and we are onboarding the open source MySQL service to those as well. You can see on the slide the certifications we already have; PCI and HIPAA are important ones that stand out, together with the ISO certifications. There are more certifications in the pipeline, and we will get certifications for all of those that are also available in SQL Database.

The other point on this slide is an important one: who is familiar with Azure IP Advantage? Okay. If you're a customer of Azure, Microsoft provides uncapped indemnification in case you are sued for patent infringement or, essentially, license infringement. What we say is: we stand behind the open source software that we run on the service, and if somebody comes to you, sues you for using the services, and says, hey, you're violating a license, Microsoft will defend you against these types of lawsuits. There are two additional points. First, we will defend with our patent portfolio: if somebody asserts a patent against you, the Microsoft patent portfolio will be available for you to use, essentially, in these kinds of cases. And the last thing, also important to note: what happens every once in a while is that patents are transferred to a third party, who then comes and tries to cash in on them. That's not a general Microsoft practice, but obviously we can't assure you it will never happen. What we do, however, is give you what is called a springing license, which says that even if we transfer a patent, we will license it to you so that you are not impacted. We believe this is a unique practice among cloud providers, and a big part of giving you the assurance that you can adopt these open source services worry-free.

All right. I've talked about what we did, how things look, and roughly how they work. Let me invite Neil up on stage; he will talk about what his company does and how they use our MySQL service.

>>Good morning. Thank you. So, we are a start-up based out of the California area, and we partner closely with Microsoft on go-to-market, and we obviously use most of the Microsoft technologies behind the scenes. We primarily focus on intelligent communication, chatbots, and process automation activities. Here is a list of our customers. Most of them fall into healthcare, one of our large focus areas; the second is the financial sector; and the third is IT companies, where security is the key requirement. Most of the time we end up deploying our entire platform on the customer's own Azure subscription, primarily for security reasons.

As I mentioned, there are three focus areas. One is robotic process automation, where we focus on document automation for mortgage companies and healthcare companies. Cognitive automation is becoming a big use case for us: most financial institutions, as well as large IT companies, are using chatbots, primarily for sales and support activities, both externally facing and internally for employees. The third focus area is content management. We are not into traditional content management like SharePoint or other solutions in the market; we focus on breaking documents down into snippets
of information, tailored for end users to consume in chunks, whether that's responding to an e-mail query or a chat, or creating documents like contracts or statements. That's where we focus on breaking down the documents, and this is where we use MySQL behind the scenes to store the content. The value proposition to our customers is primarily the productivity gain: we have the chatbots, we have clear [inaudible], these are becoming very popular, and that drives better customer experience.

So, the platform itself: as I mentioned, we are heavily into this micro-content automation, but when we go to customers, they typically already have a lot of repositories that store their content, so we focus on how we can fragment the content on top of the existing repositories, and we also offer a repository of our own if they want to use it. Once we have the content in this micro-content form, we apply extensive ML behind the scenes, which I'll talk about in a little bit. Once the knowledge is in our platform, end users consume it either through the bots or through widgets. We ended up building a lot of widgets embedded inside Office 365 apps like Excel or Outlook.

There are four solutions we have focused on. One is channel communication; Outlook is a big use case for us. There's internal employee communication, where companies want to measure the engagement of employees, whether they're looking at e-mail or not; e-mail tracking is a big focus area. This is applicable not only internally but also in sales, where the same people are always looking for analytics. Apart from Outlook, we cover all the associated media channels to auto-push content, but Outlook is by far the biggest use case. The second focus area, as I mentioned, is chatbots and digital assistants; there is a lot of activity in the market here. We differentiate ourselves because, in addition to intent analysis on the customer side of the query, we focus on the content side of the analysis, continuously refining the knowledge, whether through anonymous users or known users, [inaudible], those kinds of scenarios. The third one enables collaboration with external audiences. For internal audiences we use Microsoft Teams, because we have tightly integrated our solution into Teams, but when it comes to external customers, onboarding vendors and employees, we use microsites. Finally, document automation: this is where we process documents end to end; we use OCR, then apply ML to extract the information and auto-classify it.

So, as I mentioned, we mainly focus on knowledge automation plus chatbots. Here is a high-level architecture of the platform itself. If you look at the bottom two boxes, data sources and Azure cognitive resources: our entire platform is based on open source technologies; we do not use any proprietary products in the platform itself. We do use Azure as a platform as part of the solution, and MySQL is one of the services we obviously use. We actually hosted the entire platform ourselves until about eight months ago; then we switched to RDS, which made life a lot easier at that point. And when we moved to Azure, there were a lot of challenges, because this managed MySQL service was not yet available, and because of the security requirements, scaling was one of the biggest challenges.
One of our customers has 175,000 employees globally, so unless you have a scalable solution in place, it is really difficult to support such customers. That is where this managed MySQL really helped us recently, by switching from our own self-hosted MySQL. On Azure, we use the Vision APIs for OCR, and we use both speech-to-text and text-to-speech capabilities. We also use language translation, because several of our customers are international, so we use the language services as well. So as you can see, we use platform as a service as well as infrastructure as a service, and as I mentioned, most of our deployments are on the customer's Azure subscription; that is where scalability and easy deployment through scripting is critical for us, without which it would have been a monumental task to do manually. Here is a high-level deployment architecture: MySQL in the cloud, Data Lake, and, as I mentioned, all the different services. We even use Video Indexer. This is becoming a big use case with several media companies: they give us the videos, we run them through the back-end engine to extract and transcribe the text and auto-classify it, so it can reach consumers through the classification we use. Here's a list of the services we use; everything is on Azure now, and I think we have about eight or ten private deployments on Azure, specifically on each customer's Azure subscription.

Just a few bullet points here. We started with our own setup: we just took VMs and installed MySQL. We ran into lots of problems, and we were eagerly looking for a managed MySQL solution in the cloud. There were a lot of challenges; we were one of the first ones to get onto this infrastructure, and we had several challenges initially. At one point, for whatever reason, everything crashed and we had to restart the whole thing. It is very stable now, and we are deploying it together with our customers. The main benefits for us are security, first; scalability, second; and availability. MySQL in the cloud is really helping us. If you have any questions, I'll be available here.

>>Cool. Thanks for joining us. So, let me switch back to the slides. I'm just going to briefly talk about how our MySQL service integrates with other Azure services: Power BI, AKS, and more and more Azure services that we're integrating with to make sure that your lives as developers are much easier. One of the things we've been concentrating on is integrating deeply with App Service, because we know a lot of app developers are using MySQL in the back end. We've been working with the App Service team to make sure that where a database needs to be deployed in the back end, all of that is part of the developer workflow, and as you work through either the portal or the CLI experience, you can deploy a MySQL server along with the App Service app you're creating.

Let me go ahead and jump into a quick demo. I can show you how easy it is to deploy a WordPress application running on App Service, have a MySQL server deployed alongside it, and connect to it with tools like MySQL Workbench. If I create a blog post, you can actually see it on the back end. Creating a WordPress site is super easy. I'm in the marketplace now. Do you need me to zoom in a little bit? We actually have a few templates available; the one I'm going to show you is the Linux one, if I can search.
What this is doing is going right into the marketplace, and you can see there are two WordPress offerings available: by default, WordPress runs on a Windows VM, and the other one runs on Ubuntu. If you select Linux, you're paying for a Linux service plan. Let's go ahead and try Linux and click on that. Hit create. All right. Here I'm going to name it something, so we'll call it andrea-build-demo, and I'm going to create a new resource group along with it. In my App Service plan, I can select where I want this to be deployed; right now the default is West Europe. If you have an existing App Service plan, you can add onto it or create a new one; I have one in West Europe. And here you can see that when I select database, I can create a new MySQL server in the managed service. Creating this just requires a server admin login as well as a password, so I'll provide a password and confirm it. Here you can see I can select which version of MySQL; it's popping up on the side. Right now we support 5.6 and 5.7, and the default is 5.7. Then I can select the size of the database I want to create and the pricing tier to create it in. The default is General Purpose, in the West Europe region, and I'm going to leave it at the default: 2 vCores with five gigabytes of storage and the default backup retention period of seven days. I can change the database name, but I'll leave it as is; there will be more detail later on about some of the configurations you can pick for your database. Hit okay, and we'll hit create. This validates that everything I passed in is a normal parameter; validation is successful, and it's deploying now.
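For those who prefer the CLI, the database half of what the portal just did corresponds roughly to the following sketch; the resource names are the demo's, and the exact SKU string is an assumption.

```bash
# Create the resource group and a 2-vCore General Purpose MySQL 5.7 server
# with 5 GB of storage and the default 7-day backup retention.
az group create --name andrea-build-demo --location westeurope

az mysql server create \
  --resource-group andrea-build-demo \
  --name andrea-build-demo-mysql \
  --location westeurope \
  --admin-user andrea \
  --admin-password '<a-strong-password>' \
  --sku-name GP_Gen5_2 \
  --storage-size 5120 \
  --version 5.7 \
  --backup-retention 7
```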
It takes maybe a few minutes to create; last time it took me about three minutes. You can obviously track the deployment in progress, and when you click on it, you can see what it's creating on the back end. You can see it has accepted that it's going to create a MySQL server, and if we hit refresh, you'll also see that it's going to create a website called andrea-build-demo. If I click on the resource, andrea-build-demo, you can see the parameters I selected earlier, in West Europe, within this resource group. And this is the URL endpoint for the WordPress application that gets deployed. Let's click on that. It might still be waiting for the database to be created, so let's double-check the deployment in progress real quick. It's still deploying the database. I have a backup available; I actually did this before we came into the session, the same flow I just walked through, creating the WordPress application as well as the database that gets deployed along with it. Let's look at the one I already have existing and the URL I get pointed to. Let's give this a moment. This is your very generic WordPress template that gets pushed onto your App Service, and from here I can walk through setting up WordPress. Let's do that really quickly. Andrea's blog. We'll just say andrea, and this is a password, a weak password, but that's okay, and we'll install WordPress. Confirm password. I don't know what it's yelling at me about, but let's see what's going on. Still asking for a password. Okay. Sorry, guys, demos never really work for me. Back to the portal. I hid the tabs, yeah. Edge, guys. Edge. Let's try this one more time. Click on this again. Should be working. Created the site. Now that you have a WordPress site deployed, what do you do with it? You want to create a blog. We log in, and we'll give it the same user name and password I just entered on the previous screen. Who remembers the password I put in?

>>[Inaudible]

>>You guys have done this before me. It doesn't like it. Let's test this; we're doing this live. Oh, no. Well, that's okay, because I was also deploying another one, right? Let's switch back, and we'll find the one that's for Build. There we go. Build. andrea-build-demo. Cool. So, jumping back into this: all right, we'll try this one instead. English. This time I'll try to remember what I'm doing. Andrea's second blog. Andrea. This time we'll change it to something simpler so I can remember it. Let's try this one more time. It was builddemo, all lower case; you guys can help me remember that. Very insecure passwords; I hope you choose better passwords for your own applications. While that's installing: as you can see, other than me struggling to set up WordPress, the actual flow of provisioning the WordPress application as well as the database was just a click-click-click type of installation. Like I mentioned, one of the things we're trying to do is ensure as low an overhead as possible for you to deploy everything you need. Let's log in again. Cool, we're in. Like I mentioned, I want to create a new blog post, so let's go ahead: welcome to our session, we'll call it "build 2018 is awesome", and I'm just going to publish. Obviously, one thing we want to do to validate that it actually gets written into our MySQL database is to query MySQL, so I'm going to jump back over here and grab the connection string for the MySQL database. One thing I need to make sure I enable: I'm just going to open up the IP addresses. This is also very insecure, so don't do what I'm about to do in your regular life; it basically allows all IPs to connect to my database. I'm going to jump to MySQL Workbench, create a new connection, and grab the host name from the overview here in the portal for the MySQL server.

>>Is Workbench running locally?

>>Yes, I have it running locally.

>>[Inaudible]

>>Not over a VPN.

>>Do you have an option...

>>We have VPNs and VNet endpoints; we'll talk about that in a little bit. Obviously what I'm doing is very insecure, because I allowed everything to access my database. Let's go ahead: my user name I can grab right from here, copy that over, and let's test the connection. Turn SSL off for now. Apparently I just can't remember any of my passwords today, so we're just going to reset the password. This is something I wasn't meaning to demo, but you can do all of this admin work straight from the portal; there are a bunch of different things you can do to configure your database. If you explore the connection and security settings, you can change the storage as well as the vCores and the pricing tiers, all within the portal. Let's double-check whether that actually worked. There you go. So let's connect to it. Okay, make sure we store the password in the vault so it doesn't happen again. We'll hit okay. All right, let's connect to the full database that was deployed. Cool.
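As an aside, the allow-all firewall rule opened in the demo corresponds to roughly the following CLI call. It is insecure and for demos only; a production rule should list only known client address ranges. The server names here are hypothetical stand-ins for the demo's.

```bash
# DEMO ONLY: allow every IPv4 address to reach the server,
# mirroring the insecure rule opened in the portal.
az mysql server firewall-rule create \
  --resource-group andrea-build-demo \
  --server-name andrea-build-demo-mysql \
  --name AllowAll \
  --start-ip-address 0.0.0.0 \
  --end-ip-address 255.255.255.255
```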
So, the table I'm looking for is posts. Let me just query that real quick and show you. If you squint, because I can't really zoom in, you can see that it created two rows. The first row is your draft post, and the second one is the actual post that gets published to the blog, and you can see it's the same blog post I created from our WordPress application over here. All in all, the big takeaway: we're trying to make it as easy as possible for developers to create their MySQL servers in our service, along with any other Azure services they need, and to keep it as simple as possible.

>>Thank you, Andrea. You'll note there was all this trust I was trying to build, saying we're super secure, and then Andrea comes up and violates all the security best practices we have. Don't repeat this in your systems, but do something similar. Just kidding. Thank you, Andrea, for demoing this.

So, let's look at some of the capabilities and also touch on high availability. As Andrea showed, we support 5.6 and 5.7, and we are working on 8.0. It's going to be a matter of a few months until we have support for that. Right? Weeks? Even better. Making up for the security... that's okay. These are the supported versions, relatively straightforward; whatever you use, you should hopefully find an option for on our service. Yes?

[Inaudible]

>>Yes. So now, high availability, and with it elastic scaling: I only showed one part of elastic scaling, the storage part; I'm going to show you the compute part later. Let me walk you through how this works. Who is familiar with SQL Database? Okay, pretty much everybody. With these services, we build on the foundation that SQL Database uses; we didn't reinvent the wheel. It's an architecture where we have a management service that monitors the whole thing, and when you connect, you connect to a gateway. The gateway finds where your server is located in our clusters and connects you to the server. This is the reason, and we're not very proud of that one, why you need to specify username@servername as your user name when you connect to the MySQL server; that's different from connecting to a plain IaaS MySQL. The reason is that the gateway intercepts the call, looks at the server name in the login, and looks up where your server is.
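In practice, that login format looks like the following with the stock mysql client; the host and server names here are hypothetical.

```bash
# The gateway routes the connection based on the server name
# that follows the @ in the user name.
mysql -h andrea-build-demo-mysql.mysql.database.azure.com \
      -u andrea@andrea-build-demo-mysql \
      -p --ssl-mode=REQUIRED
```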
So, say you're running on the service, everything is fine, and now you want to do a scale operation. What we do is first spin up another instance for you with the changed compute size. Once it's up and running, we shut down the previous one, reattach the storage, and then you can connect to the newly created instance again. We do this orchestration to minimize the actual downtime. As I mentioned earlier, in the VM case you would shut down the VM, spin up the new VM, and do the failover that way; if you just scaled the VM size, that's what would happen. Here we orchestrate it to minimize the downtime you take. The second thing, which I already showed, is scaling the storage, and that remains a pure online operation; you've seen that the workload doesn't get interrupted. We scale the storage underneath you, which gives you more space as well as more performance.

One thing to note: storage you cannot scale back down. We don't actually know what's happening within your files, which pieces are used, so we can't shrink them easily without taking the risk of deleting something we don't want to delete. So storage is a one-way street, but the compute can scale up and down on the fly.

I mentioned we use the same mechanism for high availability, which goes to your question. These services don't run on a VM directly; they run in a container, a container technology developed to run SQL Server on Linux. It's Microsoft proprietary technology, a secure container environment; compared to other container technologies, it is very secure because of the way it's implemented. On our end, we run a multi-tenant environment and spin up individual servers within those containers. The advantage is that, for high availability, we don't need to provision a second server and set up replication between the two. That's because we can spin up a new server very quickly: if the server crashes, our management service detects the crash and spins up a new server for you, and since that's just starting a process, it's very fast. Then the recovery mechanisms kick in... [Inaudible] ...we attach the storage, the database recovers, and you're back up and running. This is so fast that we can give you 99.99% uptime without having to run two servers. We believe this is quite a significant advantage we have over RDS and other systems, because ultimately you don't need the additional replicas unless you really want them, for example for read scale.

>>When you say... is there an estimate of how fast that would be [inaudible]

>>The actual switchover is about 45 seconds-ish. The part we can't really predict is the recovery time, simply because that depends on the workload. You'll see in the second part of the scale demo that, simply because of the nature of the workload that is running, it takes a while; I'll talk about it in a second. If you have a database where you only ever read and never change anything, it's going to be very fast, because all changes are persisted through checkpointing. If you're running large updates, you need to recover and replay the log from the last checkpoint, and that takes longer in the recovery process. Any questions on that?

>>Is your backup capability a function of a [inaudible] snapshot of the file system, and is it done in real time while the database is running? We've got 24/7 databases; we do our backups from replication servers, which we can take down. And then how is it restored [inaudible]?

>>I'll show you. But it runs transparently in the background. Cool. Coming back to security; I need to build up the trust again that Andrea... anyhow. When it comes to Azure and security, given that we're running in the Azure ecosystem overall, beyond the security we provide on the server itself, which I'll talk about on the next slide, there is a set of security layers that apply even before you reach the server. The first thing that happens when you connect to Azure is that you need to pass a network boundary.
Any traffic that goes into Azure is monitored, and it's monitored for attacks, DoS attacks, those kinds of things. Before you even hit our services, there's a layer of protection at the edge of the Azure network. The second piece is the authentication at the gateway; the gateway, again, has capabilities against brute force attacks and the like. That's the second layer. The last layer is the authentication on the MySQL server itself. So when we talk about security, this is a concept we apply not only to the individual services; it's an integrated concept across all of Azure that obviously applies to all services, but specifically also to the database services.

Now, the security features on our side. We just use the native database engine authentication; we don't do anything special there, again for full compatibility, with that tiny exception of the at sign. Then, when we create servers, we create them with what we call security by default. You saw Andrea having to change a bunch of settings before she was able to connect to the server, and this is because we lock servers down by default: there's no firewall port open, SSL is always turned on, and you create a user and password as part of the setup process, where we make sure the password isn't too simple, although Andrea had ways to trick this. So when we talk about security by default, these are the things we put in place to protect you from making mistakes, and we make it explicit for you to open up, very deliberately, what you want to let in and how.

And we have VNet service endpoints. Are you familiar with the VNet service endpoint technology in SQL Database? No? Okay. How this works: we provide a service endpoint that you can inject into a VNet. You can then specify, for your PaaS services, a specific set of firewall rules scoped to the subnet, and the traffic is completely limited to that subnet; we also assure that the traffic never leaves the Azure backbone. This is the idea of what's called service tunneling, the idea behind service endpoints. This currently does not give you a private IP address within your subnet, but the same service endpoint technology is used by SQL Database and other PaaS services. Does that answer your question on VNets? Okay. This will go into public preview in a matter of a few weeks. If you're super eager to try it out, let us know and we can onboard you onto the private preview; that offer is open to everyone. Azure SQL Database has the same technology, so you can go and use it there, and it's the same setup, essentially.

>>Does that require anything running? Like, does it require Windows on the machine, or could you do it from a [inaudible]

>>Doesn't matter. You set up either a Linux or a Windows VM that is part of a VNet and part of a subnet, and you inject the service endpoint; when you try to connect to the service, it knows to look for the service endpoint. That's how it works.
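Once the feature reaches preview, the setup would presumably follow the pattern SQL Database uses today. A sketch with hypothetical names, assuming the MySQL service shares the Microsoft.Sql endpoint type and gains a matching vnet-rule command:

```bash
# Enable the service endpoint on the subnet, then scope a server-level
# VNet rule to that subnet so only traffic from it is allowed.
az network vnet subnet update \
  --resource-group my-rg \
  --vnet-name my-vnet \
  --name my-subnet \
  --service-endpoints Microsoft.Sql

az mysql server vnet-rule create \
  --resource-group my-rg \
  --server-name mysql-build-scale-demo \
  --name allow-my-subnet \
  --vnet-name my-vnet \
  --subnet my-subnet
```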
And then lastly, encryption: we encrypt all data at rest. All the drives are encrypted, and all the backups. This protects against somebody coming into the data center, grabbing a disk, and running away with it; in such a case they would not be able to read the data. We rely on the underlying storage technology, which encrypts the data as it is stored on the disks. And so now, I've got a couple more slides. However, we thought maybe it would be good to just show you...

>>Can I ask a question? How is key management handled for that? Let's say someone did steal a drive, or a key was compromised. How do you select a new key and re-encrypt the data?

>>The keys are rotated. I think... I'm not sure if it is 70 days or... [Inaudible]

>>How would that affect long-term data? [inaudible]

>>The keys are rotated [inaudible]

>>Cool. So, showing the portal on the slides. Portal, okay.

>>I wanted to ask a question about [inaudible]

>>He's the boss. So, short term, no, is the short answer to your question. We'll have a slide on it: the first step for us is read replicas for MySQL, specifically readable secondaries within a region. The next step, and I have to be careful what I promise because Andrea will be yelling at me, is to allow distributing those across different regions, to cover scenarios like reduced latency with local reads. Multi-master is obviously nice to have; it's also considerably more difficult to do, so there's no immediate plan for getting it done easily. There may be some things we can do, but it's not yet a committed feature. It's not an uncommon ask, though.

[Inaudible]

>>There are so many out there, and they're all proprietary [inaudible]. We are limited in how we can bring such [inaudible], but we are in discussion with some of the partners on options [inaudible]

[Inaudible]

>>It uses the built-in replication.

>>Does it support features like delayed replication? If somebody goes in and accidentally drops a table, that's what delayed replication is for, so you have a grace period where you can recover from the [inaudible]

>>That I'm not aware of, and Andrea looks like she's also not aware of it, but we can dig into this more. Obviously we allow you to do point-in-time restore; let me talk about that, and maybe it solves your use case, but I don't know.

>>Just curious if [inaudible]

>>Honestly, I don't know. Andrea, that's an item on your list now. So, let me quickly talk about backups and how we do them. You've seen this screen already in Andrea's demo earlier, and I'm confused for a second here. What you can do is configure the backup retention period; the default is seven days. We take backups every five minutes, which goes to your question, and different types of backups along the way: full backups and log backups. This allows us to provide point-in-time restore capability to any second within your retention period. If you go in and drop a table by accident, in the worst case you have to wait five minutes, which is when we take the next backup, but then you'll be able to restore to any point in time within this window. If you make a mistake at, I don't know, 11:15, in the worst case you wait five minutes, and then you can go back to 11:14:59 and your table will be back.
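Retention beyond the default seven days (up to the 35-day maximum mentioned below) can presumably be set through the CLI as well as the portal; a sketch, assuming the update command accepts the same retention parameter as create, using the demo's server names:

```bash
# Raise backup retention from the default 7 days to the 35-day maximum,
# extending the point-in-time restore window accordingly.
az mysql server update \
  --resource-group build-scale-demo \
  --name mysql-build-scale-demo \
  --backup-retention 35
```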
>>How is it actually... is it a snapshot process [inaudible]

>>No, not at the moment.

>>Is it locking files as it does the backup?

>>It does not, no.

>>So how does it prevent orphan records? Let's say it's taking the backup while stuff is hammering away at it, and there are records going into one table that have dependencies on records going into another table. How are you preventing the restore you might have to do from being corrupt because of missing records or partially written stuff? That's the part I'm not really understanding.

>>Let's take that offline, simply because otherwise we're going to run short on time, but we can walk you through it. What you can do is change the retention period to up to 35 days, which gives you the capability of restoring to any point in time within the last 35 days. The next question will be: what about long-term retention backups? That is something we don't yet have built into the service; there's currently a workaround you need to use. It is on the roadmap to provide long-term retention backups as SQL Database has them.

>>One other question. We've got some tables that [inaudible] you could never, ever use a SQL dump; it's not so much dumping the data that's the problem, but if you ever had to get it back in, you're looking at weeks to read it back in. So we stop the replication server, back up the physical files, fifteen minutes, then we put it back and reattach; it's quick and reliable. Are you actually...

>>It's a problem we will have to solve, and no: the restore operation takes some time, because we need to take the full backup and apply the incremental backups and the log backups up to the point you need. Replaying the log is an operation that requires its own compute and takes time; how long depends on the size of your data.

>>I have a question. [Inaudible] zero.

>>The other thing to point out: we allow you to store the backups in two different ways, locally redundant and geo-redundant. The key difference is that locally redundant backups let you restore to any point in time within the region. If you're just after "oops" recovery, a typical development environment scenario where you deleted something by accident and want it back, locally redundant backups will help you. If you're also after disaster recovery, which goes to the scenarios you mentioned, you can use geo-redundant backups. We store them not only in the region you're in, but also in the Azure geo-paired region, and that allows you to restore from those backups, from the geo-paired region, to any other region in Azure. So if Godzilla comes, stomps onto Japan, and hits our data center, and you have geo-redundant backups, you can restore your database to any other available region and be back up and running. The recovery point objective is about an hour, which is the delay it takes Azure Storage to get the data into the other data center; in the worst case you lose an hour. And then you'd have to factor in the restore time of your database as additional downtime, in case such a scenario actually happens, which it hasn't so far.
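Both flavors of restore create a new server rather than overwriting the existing one; a sketch of the two CLI paths, with hypothetical names and timestamp:

```bash
# Point-in-time restore to a new server within the region.
az mysql server restore \
  --resource-group build-scale-demo \
  --name mysql-build-scale-demo-restored \
  --source-server mysql-build-scale-demo \
  --restore-point-in-time "2018-05-08T11:14:59Z"

# Geo-restore into another region from geo-redundant backups
# (requires the server to have geo-redundant backup enabled).
az mysql server georestore \
  --resource-group dr-rg \
  --name mysql-build-scale-demo-dr \
  --source-server mysql-build-scale-demo \
  --location eastus
```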
These are the two different storage options you have for backups. Okay, any questions on backups?

>>Is there any feature for recovering from [inaudible] if they were trying to recover from a failure?

>>Not quite sure what you mean.

>>There may be data in the binlogs that hasn't gone into the tables yet if there was a failure. I'm imagining...

>>The server will recover this.

>>It will?

>>Yes. In case of a failover, we replay the log up to the last consistent state, yeah. Okay. Let me quickly show you some stuff on metrics. We provide a set of metrics that you can monitor. This is the database that was running the workload I showed you earlier. What happened is: while this was bound by IO initially, and you saw the throughput go up when I changed the storage, the workload is now actually CPU bound. You don't max out the storage anymore, but you max out on CPU. What I'm going to show you, and this is the second part of the scale demo, is how you can automatically react to such a change. I'm going to create an alert on a metric. Who's familiar with alerts? Okay, some of you. Basically, this allows you to specify a rule on a specific metric, and once the metric reaches a certain threshold, the alert fires. You can see build-scale-demo is my server, and if we scroll down here, I'm going to choose a metric, and you can set the condition: greater than, greater or equal, less than. We use greater or equal, and for this to happen very quickly, I'm going to choose a threshold of 1. Obviously that doesn't make any sense for a production system, but it does for the sake of the demonstration. The alert should fire when this happens over the last five minutes, which is the smallest interval. So you can say: if my CPU was over 1 in the last five minutes, please fire this alert. You have different ways of hooking into it: you can have an e-mail sent to you, you can call a webhook, or, what I'm going to do, call a logic app. There's no specific reason... well, there is a reason why I use a logic app: we have a CSA who implemented this for us for demo purposes, we're just reusing it, and we're going to publish it. The alert calls the logic app, and the logic app triggers a scale operation on the database. So now I press okay, and the alert is being created. While that happens, I'm going to switch back to this window, and hopefully, if everything works, and we haven't been super successful with demos today, you will see in just a second that all these little green things on the left turn purple, which means the connection is lost and the client tries to reconnect. That's when the actual scale operation starts on the compute side of things. And so: last fired three minutes ago. That seems not accurate, right? There we go. Okay, thank you. Now you can see the scale operation started; all these connections are essentially dropped. What kicks in now, in the background: the service got the request, we spun up the new compute, and given that the workload is just doing huge updates, essentially all of this work is in the log files and needs to be recovered. So this just takes some time. I'm not going to wait, because we're actually going to run out of time here.
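Behind the scenes, the scale action the logic app triggers presumably amounts to a compute update like this; the SKU string for a 4-vCore General Purpose, Gen 5 server is an assumption of this sketch.

```bash
# Scale compute from 2 to 4 vCores. Unlike storage scaling, this involves
# a short failover while the new instance spins up and recovers.
az mysql server update \
  --resource-group build-scale-demo \
  --name mysql-build-scale-demo \
  --sku-name GP_Gen5_4
```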
But you will see at some point this will come up, and maybe at the end of the session we'll switch over and the instance will have scaled from two cores to four cores. And actually, given that I put in a threshold of 1, it will continue to scale all the way up, and you will get a huge bill at the end; you need to be careful with these things. That's the idea: what I'm showing you, you can automate if you want, and scale down as well.

>>Is this all priced using that same request unit?

>>No, no, no. This is based on the vCores and the storage you provision. If you look at the pricing tier, you can see how you provision it, with vCores and storage. These are essentially the compute and storage that you choose; then there's a cost [Inaudible], a multiplier on top, and a price per gigabyte. You can provision storage in one-gigabyte increments.

>>You're charged for throughput as well?

>>Just storage and CPU. If you take data out of Azure, you get charged for outbound network traffic, but that's not service-specific.

>>[Inaudible] there's an immediate cost. Okay.

>>All right, let's switch back to the deck, because Andrea wants to do more demoing. The things I just talked about have slides in the deck; I'm going to jump over them because we already touched on them. This is the read replica item I mentioned before. There are going to be two things. One is data-in replication, allowing you to set up replication from on-premises into the service. The other is readable secondaries within the region; we'll allow up to five that you can connect to and run read workloads against. All right? Any questions on that? No. Thank you. There we go. Best practices, also. Which one do you want to see? Do you want to talk about best practices, or about getting your data into the service? Any favorites?

>>Migration.

>>Best practices we're going to skip. Andrea, come back up.

>>I know you like best practices.

>>He's going to be very disappointed.

>>Okay. Well, hopefully...

>>You need to make it fast.

>>Hopefully the demo works well for me; this afternoon my demos don't like me.

>>There's no dependency on any [Inaudible]

>>InnoDB, really. [Inaudible]

>>Even on MariaDB it will be [inaudible] [Inaudible]

>>All right. Hey, guys, can we take this offline? Because I want to show the migration and how simple it is; that's important to the big takeaway. What we have available today is the Azure Database Migration Service; if you have questions, you can bombard her afterwards. The Database Migration Service supports a bunch of different source and target pairs. What's available now is SQL Server, hosted on-prem or in another cloud, to our SQL Managed Instance service, which is available now; SQL Server to the Azure SQL Database service; and Oracle to SQL Database. What I'm going to show is how to migrate from a MySQL server hosted, we'll call it on-prem, to our MySQL service. In the portal, this is what it looks like; I have a migration project available here. You can sign up for the private preview if you're interested, and we can get you up and running. I have an existing project, and as you can see, I have a source server, a VM I have hosted in Azure, and a target server that I created in our service. What I'm going to do is create a new activity.
What this will do is start a migration, a continuous migration. Continuous migration means minimal-downtime migration: it continuously migrates the data from your source to your target until you tell the service to do a cutover. This is for scenarios where you have an application that can't afford a large downtime; if you were migrating from MySQL to MySQL by hand, you might have to shut down your server, migrate all the data, and then switch your app over. This instead allows you to migrate continuously until you're ready to hit the button, and then you hit the switch and say: hey, service, let's switch the application over now, and you select when you want to do that.

So, this is just to access my demo. All right. Here I'm providing the details for the actual source, which is the MySQL server I've hosted on this VM, and this is to connect to the target, which is a MySQL server in the MySQL PaaS service. I'm going to select one of the databases I have available, an inventory database, and hit save. I'm going to call this activity "run now" and start the migration. What this is doing in the back end is starting the migration, so if I refresh, it now shows up in the migration pipeline. I'm going to switch over to an app I have connected that reads directly from the local database; you can see localhost here, and it's showing a bunch of inventory I have for "Star Wars" movies. Now I'm going to switch to two different Python scripts. This one over here queries the actual MySQL server I have hosted in our service; you can see I'm connecting to the build demo server, logged in, and what it does is query the last available order, so it tells me the most recent order number. The last order was no. 41, and you can see the same thing in the app: no. 41. With this other script here, I'm going to start creating orders in the MySQL server I have hosted locally, and now that the migration is up and running (let's double-check that it is; let me refresh this real quick; I can see it's been running for a minute now), I'm going to start creating new rows in the locally hosted server. Wrong password; passwords are not my friend today. Sorry, guys. There we go. Yes, something worked. This starts creating new orders: you can see it's creating order no. 42, now no. 43. What we're trying to see here is that, if the migration is working correctly, the latest order will start to update over here. Let's give it a bit. You can see the latest order is 42, and the migration is happening basically in real time between the source server and the target server hosted in the Azure database service. We're caught up to no. 45. When you want to finish your migration and you're ready to do the application cutover, what you'll do is essentially stop all traffic to your source, your original server, the one...
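The convergence check in the demo, comparing the latest order on source and target, can be reproduced with the stock mysql client; the host, database, and column names below are hypothetical stand-ins for the demo's.

```bash
# Latest order on the local source server.
mysql -h localhost -u root -p \
  -e "SELECT MAX(order_number) FROM inventory.orders;"

# Latest order on the Azure Database for MySQL target; once the numbers
# match and stay matched, the application can be cut over.
mysql -h build-demo.mysql.database.azure.com \
      -u admin@build-demo -p --ssl-mode=REQUIRED \
  -e "SELECT MAX(order_number) FROM inventory.orders;"
```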
