
OpenStack Podcast #27: Monty Taylor


Few have been involved in OpenStack as long as Monty Taylor, whose OpenStack path began at Rackspace and continues today at HP, where he is a Distinguished Technologist. Watch or download this OSPod episode to hear Monty's thoughts about:

  • His love for Duke and his background in theater
  • Nebula shutting their doors, and what it means for OpenStack
  • Why AWS isn't a fit for everyone or everything
  • Project Shade and varying behaviors behind APIs
  • The chicken-and-egg problem of running OpenStack as a public cloud provider
  • Containers and Project Magnum
  • OpenStack and the "sophomore slump"
  • The importance of focusing on stability in OpenStack

You can follow Monty on Twitter at @e_monty and find him on IRC as mordred. Have a show idea? Tweet Jeff and Niki at @openstackpod

See past episodes, subscribe, or view the upcoming schedule on the OSPod website.

For a full transcript of the interview, read on below.

Jeff Dickey:                All right. Good morning, everyone. I'm Jeff Dickey from Redapt.

Niki Acosta:               I'm Niki Acosta from Cisco, and we have an awesome guest with us today. I'm really excited. This is kind of full circle because I met Monty a very long time ago, back in the early days of OpenStack. Today, we're welcoming Monty Taylor from HP. Distinguished engineer. Technical committee, OpenStack Board. What did I miss, Monty?

Monty Taylor:            Rabid Duke fan. Rabid Duke Fan.

Niki Acosta:               Oh, yes. A Duke fan.

Monty Taylor:            Got a ... we can't leave that out today, of all days.

Niki Acosta:               I'm sure you won some bets, I hope? I mean, just kidding. Who bets?

Monty Taylor:            I won some bets that I was going to be able to get off the floor at the end of the game, so that was ...

Niki Acosta:               (Laughs)

Monty Taylor:            That's really all I was looking forward to at that point. Hopefully, nobody who's watching or listening is either from Wisconsin or Kentucky.

Niki Acosta:               Sorry. Well, we're glad you're up and awake, and drinking your coffee, and here with us today. We typically like to start the podcast asking you your tech story. How did you get into tech? Were you like that nerdy kid with the computer? What's your story?

Monty Taylor:            Yeah, I was that nerdy kid with a computer. My first computer was a Texas Instruments 99/4A, which had a whopping 4K of RAM, which was a giant deal. We were very excited about that. My dad brought one of those home and set me up with programming tasks for fun, and I did them and enjoyed them.

It's always been there. You might think that that would have led to me appropriately going to school, studying Computer Science, and doing that entire path, but you'd be wrong.

I did go to school and study Computer Science for half a semester, and then ... ever since then I was a theater major. I have degrees in various theater topics, which makes me very well suited to writing cloud opera. I hope everybody enjoys their theater-training-background TC member.

Niki Acosta:               I was a theater nerd. What was your most fascinating role that you played that you loved the most?

Monty Taylor:            Fascinating role that I played? Anybody that knows me on IRC, I'm Mordred there, and that actually stems from having played the role of Mordred in a production of Camelot in the ... my junior year in high school. That's been a moniker that stuck for quite some time now.

Then, actually, in college I moved into directing and also lighting design. There's all sorts of ... I haven't acted in a show for a long time and that's really better for everybody. We don't need any of that.

Niki Acosta:               No, you do just fine, helping move OpenStack along. Speaking of which ... By the way, did I cut you off from telling the rest of your story? How'd you get into OpenStack?

Monty Taylor:            There's this piece of software that people may or may not have heard of called MySQL. It's an open source database package that some people use. It's got a few prominent users out there. Google, Facebook, Twitter. Yeah, pretty much anybody who's anybody that does anything in the Internet is a MySQL user. We kind of dominate the world.

Anyway, I worked for MySQL back in the day, when we got bought by Sun Microsystems, which is also a company that you may or may not have heard of, since they don't exist anymore. We got bought by Sun. Brian Aker, and Jay Pipes, and I, and a couple of other people started working on a fork of the MySQL server, called Drizzle, which was sort of a back-to-the-roots kind of effort, which was a lot of fun.

Brian, being the amazing human that he is, managed to convince Sun executive leadership that, after having bought MySQL for a billion dollars, they should also fund the fork of it, which was amazing. Then, we got bought by Oracle.

Almost immediately thereafter, the Drizzle team moved en masse from Sun/Oracle to Rackspace. I happened to be at Rackspace with Jay, and Eric, and crew when OpenStack started. I got into OpenStack because I was one of the people that the management at Rackspace asked to help put it together. It's just been something that I have done, I guess, for as long as it's been around.

Niki Acosta:               Now, HP ... wait. Before we get to the ... No, let's talk about that now. How did you ... What are you doing at HP now? You've done a lot. You've ... Actually, you moved a little bit but you've always been related in some way, chained, to OpenStack.

Monty Taylor:            Yeah. Amazingly, before OpenStack, I had more of a tendency to move around to lots of different places, and that's not really been the case since. I mean, I've moved from Rackspace to HP, but that was three years ago. Pretty much, I've been doing a lot of the same thing.

I have a bunch of people on a team that works on the OpenStack infra things, which I couldn't be happier that HP has helped to fund to the level that it has. We've had a bunch of folks work on the TripleO stuff. I've got people working on Ansible things now, so it's a lot about automation: testing automation, deployment automation, all those sorts of things.

Basically, I don't like doing repetitive stuff myself. I think it's very boring. Anytime I can get involved with making sure that repetitive tasks go away, I tend to be happier. Sometimes that makes other people happy, and sometimes it doesn't, and I'm fine with that. Whichever one they prefer.

Niki Acosta:               One question, and this is the one that's been burning in my mind to ask you ...

Monty Taylor:            Ask the question.

Niki Acosta:               ... because we talked a little bit about the show earlier, before the show started. This is an interesting week for OpenStack, and that is because of the sudden news that Nebula was shutting down.

Monty Taylor:            Yeah.

Niki Acosta:               Which ... There's been viewpoints from all sides on why. Does it mean that OpenStack's not doing well? There's just a gazillion and one opinions on that. What is your take ...

Jeff Dickey:                Well, Chris Kemp was supposed to be on the show today.

Niki Acosta:               Yeah, he was. He cancelled the day before that news went live. After, of course, it went live it made sense why he cancelled this week ...

Monty Taylor:            I know.

Niki Acosta:               ... which, you know, bless his heart. I love Chris. What are your thoughts on that? Is it just where we're at? Is it telling?

Monty Taylor:            No. I think it's kind of where it's at. I mean, first of all, I want to say ... and I've tweeted this out, but there's a difference between doing a quick tweet and actually getting to express something in a slightly longer form.

Nebula punched above their weight class, for a long time, in terms of the resources and effort that they put into OpenStack. They've been there since the beginning; obviously, several of them were original authors of what is now Nova. They're the ones who brought us a lot of these pieces in the beginning, but when they spun off and started Nebula as a start-up company, they didn't just go off into a corner and work solely on product things. They continued to contribute upstream.

If you think about a company that has somewhere between ten and thirty people, having any number of full-time people focused on upstream, ratio-wise that's a huge amount of effort. I've got forty people doing pure upstream development at HP, but HP is a three-hundred-thousand-person company.

The relative cost of that for HP compared to what Nebula was contributing in terms of contribution per capita was mind-blowing. I got to give them full respect and full props for having done that for that period of time.

That said, they tackled software and hardware. To get into a business, honestly, where you're playing hardware ... When they started the company and told me about what they were going to do, I was like, "Obviously, what a great business model. What a great business plan."

Then HP, and IBM, and Cisco, and Dell basically all jumped in with both feet. If you've got all of the big players doing this thing in the hardware play and you're a start-up, it takes a lot of capital to deal with the hardware things.

If you're dealing with hardware things, and software things, and integration, it's a really gutsy move. In fact, I'm proud of them for having taken that gutsy move. Gutsy moves aren't gutsy moves if they always work out. If all of the moves that you try succeed, then that means you're not trying hard enough. It means nobody took a big enough swing.

I think, as a community, we should really be proud of Nebula for existing. We should be really proud of them for having taken a giant swing at a combined hardware-software play as a start-up, and to know that amongst all of us, you know, some of us have gotten bought, some of us have failed. That means as a community, we're trying hard enough. It means that we have people making gutsy choices and gutsy moves, and sometimes those aren't going to work out.

I think that it's a testament to how many of the big players have gotten really fully vested that a hardware-oriented start-up company is having a hard time making inroads. That means that, as an industry, we've gotten really good positioning, I think. I don't know. There's a reason that I'm not a stock investor, stockbroker person, so I could be very wrong about many of these things. I just hack on Python code.

Yeah, I think it's a shame. I think that Vish and Chris have both done immeasurable things for our community. I'm sad to see them go. I'm sad to see Nebula not be a thing anymore, but also, good for them for having gone out swinging. I think that's pretty awesome.

There are always ways you can figure out how to twist your start-up and find a way to appease yet another round of funding zombies. They didn't do that. They stayed true to a product vision, and it didn't quite work out, and I think that's great.

Jeff Dickey:                Where does this leave the OpenStack community? I mean, it seems like we are all either OEMs or Mirantis. There's not ... I mean, this is very different than it was a year ago, or two years ago, or five years ago.

Monty Taylor:            Yeah.

Jeff Dickey:                This is ... We're in a different space, where every hardware vendor out there has an OpenStack plug-in or capability. It's all very vanilla OpenStack support, and now there are all the OEMs. Where are we?

Monty Taylor:            I think ... I mean, I think that's a ... I think we're definitely in a transitionary period, because things have changed. I think we've learned some things. Based on what we originally thought we were going to be doing, I think we've learned some new truths about that honestly.

I know ... I've had some conversations with people. There was this original idea, and I know I was a big proponent of it, that we were going to have fifty OpenStack public clouds that were all seamlessly interoperable. We were going to take out Amazon and Google by the collective force of all those people.

I think we've come to learn that that's not actually where our value is, and not what's likely to happen. We had some conversation at the Board, at the last board meeting, about this. Ultimately, even if you have all of the OpenStack companies each having some Rackspace- or HP-like public cloud, each of those individual companies is never going to win price comparisons with Amazon and Google. It's just not feasible, because they're underwriting it for other reasons.

Cloud, for each of them, is an afterthought; they're going, "Ooh, I've got some extra data centers' worth of gear. I might as well throw some cloud on it." That's fine. That's a very commodity play. The thing that we have, though, that they never will, is the thing that I've been, potentially incorrectly, complaining about, which is that the massive configurability that we have allows us to be suitable for more workloads.

For you to really run your production workload on Amazon or Google's public clouds, you have to buy into their worldview about what a workload wants to look like. You have to write your app in the Amazon way. You have to write your app in the Google way. You have to buy into this cloud-native, twelve-factor app mantra, and it's great. If that's what your app needs to be, write it twelve-factor, man, go nuts. That's fantastic.

In the real world, it turns out there's a bunch of workloads that don't fit that model. Rather than trying to take your workloads, and take your business, and fit them to some prescriptive model, which really only exists because it's the way that Amazon makes their margins ... It's not because it's a good design; it's how Amazon rolls out their servers.

Rather than trying to fit your workload to that, we've got a cloud product that gives you eighty, ninety percent of the things that are the same. You can use the cloud paradigms, but you can actually tune a cloud to your workload, to make the choices that make sense for what you're doing.

You can have per-data-locality, per-regulatory-region clouds for your business. You can still have your developers writing things taking advantage of cloud paradigms. Look at my favorite example, which is the thing that I'm doing, because I'm narcissistic and I like to talk about myself.

OpenStack infra is running a giant cloud application across a couple of different clouds, and we have each of the different types of things that people say that you either do or don't want to do in cloud. We do all of them.

We have many special pets in our infrastructure. We have machines that we've spun up and we care for like they are traditional IT applications. We're running them in the cloud. We haven't transformed them into cloud-native applications, because the cost of doing that would be insane.

It doesn't make ... it doesn't provide any value to anybody for us to do that. It turns out, you can run those things in an OpenStack cloud. It works great. Now, if we're running our own cloud for that, we might make some different deployment decisions to, say, understand that we're going to be running really special things that we want to be highly-available and stay up all the time.

And we might have a second cloud that we know we're going to run nothing but ephemeral workloads on, where we don't need the cost of high availability. Both of those are going to operate in the same way, and that's a thing that Amazon or Google just flat can't do. They have to play the numbers game, they have to play the margins game, and we get to play the flexibility game.

We get to say to them, "You know what? We can be the thing that you can customize for your business, and so that you can run the business that you need to run." Rather than needing to run the thing that we tell you because we just don't have the flexibility to offer you anything else.

We're going to backwards-invent some theory as to why you should run your application this way. Anyway, I don't think I actually answered your question there, but I talked for, like, at least an hour.

Niki Acosta:               (Laughs)

Monty Taylor:            I think we're in a good place, because with those things starting to come out, I think we're seeing the next generation of OpenStack surge. It's time for the next wave. Like Akanda and those guys that just spun up, with Mark McClain, and Sean Roberts, and those guys over there. That's a new thing, and they're a start-up focused on a very specific topic. They're not out there to be another OpenStack distro. That space is covered. We don't really need any more of those.

They're saying, "Hey. There's a specific problem that a certain customer segment is going to need that will help enable a certain workload, or a certain profile, and we're gonna tackle that." In the OpenStack framework, then people who are ... since all of the big guys, the HPs, and the Ciscos, and the IBMs are rolling out OpenStack to everybody, then this gives ... This is the ecosystem we've been talking about all along.

This is the marketplace where the start-up companies can make these new products and offer them to people in an OpenStack framework. You're not trying to sell some giant bank your end-to-end solution; you're saying, "Hey, you've already gone OpenStack. If you buy our little thing over here, then it will make this particular thing you wanted to do in your OpenStack environment better."

I think that opens up a whole new set of doors for people, if we're not trying to solve the one-cloud-to-rule-them-all problem which, I think, ultimately we were never going to achieve in the first place.

Niki Acosta:               Speaking of one-cloud-to-rule-them-all, I think part of the allure and magic in OpenStack is knowing that your APIs don't really change.

Monty Taylor:            Yeah.

Niki Acosta:               There is the matter of the behavior of what happens behind the API ...

Monty Taylor:            [Inaudible 00:18:42]

Niki Acosta:               ... that changes quite a bit. Even though you should expect some level of predictability, results may vary.

Monty Taylor:            Yeah.

Niki Acosta:               I know you're working on some stuff lately that is working to address that. Can you tell us about Shade?

Monty Taylor:            Yeah, I'd love to. Just as a warning, I'm going to pick on Glance a whole lot as an example, just because it's easy to. I can pick on everybody; literally, every OpenStack project has some things like this, and I think it's okay, but it's a thing we have to deal with.

I mentioned earlier, OpenStack infra runs across a couple of different public clouds and the node pool that we use to provide test resources for all the OpenStack developers is an elastic ... I mean, it couldn't be more cloud-native. It just literally spins up and tears down VMs all day long.

Our usual rate is somewhere between ten and twenty thousand VMs a day. It's a pretty big beast and it runs across HP and Rackspace. It turns out there is a decently large amount of business logic we've had to learn to successfully run at that scale and that volume across two different public clouds. In many situations, they're each doing legitimate valid OpenStack API things. They're not ... Neither one of them, neither HP nor Rackspace are doing bad things.

We're not working around vendor incompatibilities. The things we're having to work around are: HP has deployed Neutron, with floating IPs for VMs, and that's how you get a public IP. Rackspace hasn't deployed Neutron, and you get a public IP just by spinning up your server in Nova.

To upload an image in Rackspace, you use the Glance v2 API, which requires you to upload your image to Swift and then create a Glance task to import the image into Glance. In HP Cloud, it's the Glance v1 API, and you upload directly to the Glance endpoint using the Glance image-upload API call.

These are just two differences, off the top of my head, in how those clouds operate that are completely valid. The APIs are fine. Your discoverability through Keystone is fine. You've just got to know some things to do those operations.

What I really want to say at the end of the day is, "Please give me a server with a working IP address," and, "Please upload this image." As a user, I don't care whether that means uploading to Swift first and then importing, or uploading directly to Glance.
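[Editor's note: as a rough sketch of the kind of per-cloud business logic Monty describes having to encapsulate, the branching looks something like the Python below. The helper functions, the per-cloud settings dict, and the exact client calls are illustrative assumptions, not actual nodepool or Shade code.]

```python
# Hypothetical sketch: the caller passes in already-authenticated novaclient,
# glanceclient, and swiftclient objects plus a per-cloud settings dict.

def get_server_with_public_ip(nova, cloud, name, image, flavor):
    """Boot a server and make sure it ends up with a reachable public IP."""
    server = nova.servers.create(name, image, flavor)
    # (Waiting for the server to go ACTIVE is elided here.)
    if cloud.get('needs_floating_ip'):
        # Neutron-style cloud (the HP case above): a public IP is a floating
        # IP that has to be allocated and attached as a separate step.
        fip = nova.floating_ips.create(cloud['floating_ip_pool'])
        server.add_floating_ip(fip)
    # Otherwise (the Rackspace case above) Nova hands back a server that
    # already has a public address, so there is nothing extra to do.
    return server


def upload_image(glance, swift, cloud, name, filename):
    """Get an image into Glance via whichever path this cloud supports."""
    if cloud.get('image_upload_via_task'):
        # Rackspace-style (Glance v2 plus tasks): push the file to Swift
        # first, then ask Glance to import it. Exact task input varies.
        swift.put_object('images', name, contents=open(filename, 'rb'))
        return glance.tasks.create(
            type='import',
            input={'import_from': 'images/' + name,
                   'image_properties': {'name': name}})
    # HP-style (Glance v1): stream the image bits straight to Glance.
    return glance.images.create(name=name, disk_format='qcow2',
                                container_format='bare',
                                data=open(filename, 'rb'))
```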

We've had to encapsulate a lot of that inside of our own node pool software. A few months ago ... I guess it's been several months ago now, I started working with the Ansible guys upstream on the Ansible OpenStack modules, the modules in Ansible that actually operate the OpenStack APIs.

They were a mess, and that's also fine. I don't want to pick on anybody there, but they were getting a little long in the tooth. They didn't support Keystone v3. They didn't support domains. They didn't support pluggable auth. Looking at what it was going to take to do all that was getting a little bit crazy.

What we decided to do was take the logic that was in the Ansible modules and the logic that we already had in node pool, which was seeing a lot of production usage every day, and combine them into a library that could be reused by the node pool project, reused by Ansible, and really reused by anybody else. That's where the Shade library was born.

It's doing a lot more than I'd hoped it would need to. At some point in the future, it'd be great if Shade went away, right? It's filling a need at the moment that I'm a little bit unhappy has to be filled, but it's a temporal need and it's helpful today.

We're currently rolling out patches to infra's node pool to port it to the Shade library. We've got Shade doing integration testing in the OpenStack gate. It's all based around that idea that there are these resources: I want to get a server from my cloud. I want to upload an image.

There are basic building blocks you want to deal with. Now, you don't necessarily always want to deal with the APIs, because you don't necessarily always want to know that this cloud uses Neutron and this cloud uses nova-network. You just want a server that can talk to the network; that's all you want in the basic case.

If you want the advanced things, this is where the vendor things come in. If you want to do some really advanced software-defined networking routing things, you absolutely are going to have to directly use the Neutron APIs. I'm not going to have a simple one-button-click workaround for you. It's not going to work out.

That's always going to be there and available, and be the flexibility for people who need it, but for the basic eighty-percent things: I want a server, I want an image, I want to boot this image on this server, I want a volume ... those are just normal concepts.
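[Editor's note: for a concrete picture of what those "eighty percent" building blocks look like through Shade, here is a minimal sketch. It assumes the clouds are defined in an os-client-config clouds.yaml, and the cloud, image, and flavor names are made up for illustration.]

```python
import shade

# Shade reads credentials and regions from clouds.yaml (os-client-config),
# so the same code runs against either cloud; only the cloud name changes.
cloud = shade.openstack_cloud(cloud='rackspace')

# "Please give me a server with a working IP address." Shade decides whether
# that means attaching a floating IP (a Neutron cloud) or doing nothing extra.
server = cloud.create_server('test-node', image='ubuntu-14.04',
                             flavor='m1.small', wait=True, auto_ip=True)

# "Please upload this image." Shade picks the Swift-plus-task path or the
# direct Glance upload path depending on what the cloud supports.
image = cloud.create_image('my-image', filename='my-image.qcow2', wait=True)

# "I want a volume." The same idea for block storage.
volume = cloud.create_volume(size=10, wait=True)
```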

We tried to embody those in the Shade library, basically to drive the needs that we have in infra and that we're also seeing in the Ansible modules. If that winds up being useful for some other programs, or for some other consumers, that's fantastic. That will be really great if it's helpful. [Crosstalk 00:24:17]

Niki Acosta:               Is that a service provider play now that maybe might be good for end-users at some point?

Monty Taylor:            I think it's really more ... At the moment, it's really more just a developer library piece. So, rather than using python-novaclient directly, you can use Shade and it'll use python-novaclient better. It will do the right calls for you based on which cloud you're talking to.

We're trying to abstract that away, rather than trying to get each of the clouds that exist to agree on how they're going to deploy things, because I do think there's validity in people making different choices. I'm actually very happy that Rackspace has some servers that give me direct IPs; that's fantastic. It's a great difference to exist, but for most of my programming I don't care about it.

Rather than trying to get everybody to homogenize onto one model of deployment, we can actually mitigate that in some helper libraries and some helper logic. To express to people, to make it easy to understand: "Oh, so you're on HP Cloud? Okay, you need to do three things to get that thing." "Oh, you're on Rackspace? Oh, you need to do these three things over here."

They're slightly different, but they express the same concepts. Maybe one day we get to the point where that gets backported into some API for OpenStack itself. Maybe that doesn't make any sense, because maybe that is just one of those "there are four competing standards; let's solve it by having a fifth" situations, adding a fifth standard to the pool. I don't necessarily need to solve that. This is one of the things that we do a lot at [inaudible 00:25:53].

We're solving our immediate problems for ourselves right now, trying to do that in a way that could be useful to somebody else, but not trying to spend too much time thinking about what might solve the world for everybody. We're solving our problems, and if our solutions are helpful, neat. If they're not, that's okay, because they are solving our problems. Oh, wow, that's ...

Niki Acosta:               Yeah, just ignore that chat. Sorry.

Monty Taylor:            [Crosstalk 00:26:19] That's very fancy.

Niki Acosta:               Sorry.

Monty Taylor:            All of your stuff you guys are doing here. All these images, and talking, and ...

Niki Acosta:               Don't scream crazy, so it's like, "Jeff, ask the next question," so I can unmute him. I think he's in over his head. That bitch! Sorry, carry on.

Monty Taylor:            No, I'm more fascinated by the dog and the maid now.

Niki Acosta:               (Laughs) You know what's cool? It's like, you guys are doing this Shade thing ... I think this is where it's really cool to see the community at work. You were talking, before the show started, about how you're kind of bummed that the design summit is separated from the conference.

You don't get an opportunity to mingle because you're buried in a room, geeking out with other technical contributors. There's probably thousands of examples of people doing things like this, that is ...

Monty Taylor:            Yeah.

Niki Acosta:               Some will make it, some won't, like the community will decide. There's some really smart people solving some really interesting problems.

Monty Taylor:            Yeah.

Niki Acosta:               It's really cool to get a glimpse into that from a technical Board Member's point of view.

Monty Taylor:            Well, this is actually one of the reasons that I've been a pretty strong proponent of all the Big Tent stuff which people may or may not be pleased about. If you're not, that's okay.

Niki Acosta:               Can you describe Big Tent for our viewers who are not familiar with Big Tent?

Monty Taylor:            Yeah, I can. The TC made some changes to how we look at bringing projects into the fold. We refer to it as Big Tent largely, I think, because it was the title of a blog post that I wrote on the topic, something about big tents and cats. The problem is that there were two different problems that were intermingled.

There's a social aspect: who is OpenStack? Who are we? What are we doing as a set of people? Then there's the other thing, which is the thing DefCore has been focused on, which is: what is OpenStack? What is the thing we release, and hand to somebody, and put the label of OpenStack on and say, "
