
Interview with Piston's Christopher MacGown

Sep 02, 2024

With the big news today, it seems like a good time to repost this, from the Metacloud blog:

Piston Cloud co-founder and CTO Christopher MacGown has been involved with OpenStack since the very beginning. He was at Rackspace when they switched from Ruby to Python, he helped blueprint the original project, he pitched the idea to the technology community, and today he serves on the Foundation Board. So he's got a super-insider's perspective and he shared that with us on last week's OpenStack Podcast. Specifically, he touched on:

  • How watching "Hackers" 17 times impacted his career choice
  • His role in the early days of OpenStack
  • How Piston came to be
  • The importance of feature parity between modules
  • What the Foundation Board is talking about right now
  • Why Joshua McKenty left Piston
  • The need for greater stability before we move up the stack

To see who we're interviewing next, or to sign up for the OpenStack Podcast, check out the show schedule! Interested in participating? Tweet us at @nextcast and @nikiacosta.

For a full transcript of the interview, read on below.

Jeff Dickey:                That is great. That is a perfect way to start the podcast. Hi, I'm Jeff Dickey from Redapt.

Niki Acosta:               Hi, I'm Niki Acosta from Cisco, and we have an awesome guest for us today. We are super honored. Do you want to introduce yourself, Christopher?

Christopher MacGown:       Hi, I'm Christopher MacGown, I'm the CTO and co-founder of Piston.

Niki Acosta:               Yeah! So we decided, based on Christopher's awesome bio on the Piston website and his affection for 16th and 17th century philosophy, that we would cue up something super fancy to start off today's show. Why not? It's a holiday week. I think most people are kind of mentally checking out at this point, but nonetheless, Christopher: you're back from vacation, yes?

Christopher MacGown:       Yes.

Niki Acosta:               And you're trying to hold it together for the sake of the podcast today?

Christopher MacGown:       Yeah. The jet lag, I'm starting to get over it. I'm really glad we have a three-day week this week with the holiday.

Jeff Dickey:                Yeah, it's nice coming back from vacation to a three-day week, isn't it?

Christopher MacGown:       Yeah, I was surprised yesterday. I came into the office and I'm looking at the calendar, everybody's on vacation, and I'm like, wait, what's this Friday thing?

Niki Acosta:               So Christopher, let's go back a little ways. I know, like myself, that you were at Rackspace back in the early days of OpenStack, but before we get to that part, tell us how you got into tech.

Christopher MacGown:       So my dad was in the Marines and he did cryogenic aeronautics. Basically, he made liquid nitrogen and liquid oxygen. So from an early age, I've always been really interested in mechanical engineering and all of the technology around flying airplanes, but I never really wanted to fly airplanes because that was just a thing everybody did, as far as I was concerned. On my family's side, basically everybody's involved in tech in some way. My grandmother was a math teacher, my grandfather was a shop teacher. He built his own airplane. So even before I was a teenager, I was really into Star Trek and all of the sci-fi stuff and I was like: I'm going to be a rocket scientist!

It's really embarrassing. In the late 90s, there was this really, really horrible movie called Hackers.

Niki Acosta:               I love that movie! "Hack the planet!"

Christopher MacGown:       It was really horrible and I watched it like 17 times and came away from it. I [inaudible 00:02:55] had a computer forever and wrote my own games and played games and everything, and I didn't realize that there was more to it than just, oh, you're programming -there's actually a whole other world out there. So we ended up getting the internet, and we had that for a long time. So I got into tech that way: I watched that terrible movie and it sort of affected me.

Niki Acosta:               I have that movie on VHS and I'm not getting rid of it.

Jeff Dickey:                That's one of Angelina Jolie's first flicks.

Niki Acosta:               It was, "crash and burn." Yeah!

Christopher MacGown:       And Matthew Lillard. My dad was a member of a BBS, and I was born in Maine, and there are two groups of people from Maine -two nicknames -and they don't really like to mesh with each other: the professional kind of people, or the ones that are interacting with tourists a lot, like to call themselves Mainers, but the rest of them call themselves Mainiacs. So on the BBS, that was my nickname, my handle for the longest time, so when I saw that movie I was like, oh my god! He has the same name as me. Then I realized, oh man, I just compared myself to Matthew Lillard and thankfully he's-

Niki Acosta:               Oh man. That brings back so many memories. So did you go to raves, too?

Christopher MacGown:       I was too young for that.

Niki Acosta:               So fast forward a little bit -cool story, by the way -I always wonder how much of people's inclination to join tech is due to nature versus nurture and it sounds like you have a little bit of both in there. But somehow, you're at Slicehost which got acquired by Rackspace, and you're actually at the first Austin summit, right? Tell us about that.

Christopher MacGown:       So I joined Slicehost in the middle of 2008, which is actually really late in Slicehost's life, right before Rackspace acquired us. I helped build the Slice Manager and what they called QB, which was the Slice control center that basically did all the virtual machine orchestration. Rackspace acquired us at the end of 2008, and I ended up joining Rackspace and leading the team that built Rackspace Cloud Servers and helping to lead the team that built Rackspace Cloud Servers for Windows. As we were building that, we ran into scaling issues at Rackspace around how we were able to hire people, how we were able to build the technology and respond to the increasing demands of our customers. So in late 2009, early 2010, we started researching the idea of switching the technology from Ruby, which was what Slicehost was written in, to Python. So we started going down that path.

Rackspace wasn't at that time a software company, so they were researching, or starting to hire people from, a lot of the open source community, trying to figure out a way to take the things that we built and turn that into an open source project that everybody could consume and develop for, and Rackspace could focus on what Rackspace is really good at, which is dealing with the underlying bullshit of hosting. It's the boring stuff that people don't really want to worry about: plugging in servers, getting them up and running, that sort of thing. So in May of 2010, we started talking with the team at NASA, the NASA Nebula team. They were working on [ASU 00:06:43]. They had just launched that on my co-founder Joshua's website, and we brought them in and talked to them about our plans, and they were really excited about it because NASA has that mandate to-I'm also wearing a NASA shirt today.

Niki Acosta:               Woohoo! Awesome.

Christopher MacGown:       They have a mandate to basically take technology and advance it and then get it out of the way so the rest of humanity can build upon it. So Rackspace and NASA went back and forth for a couple of months. In July 2010, we had this secret summit where we invited 50-ish technologists and 50-ish business guys from something like 25 to 50 companies to come out while we presented what was going to become OpenStack. I gave four of the twelve technical talks. I talked about the OpenStack API. If you're familiar with some of the vagaries around that, I am somewhat responsible. I've been spending the last four years trying to make up for that.

Another was on the existence of a guest agent -which, if you're familiar with OpenStack, there isn't one, and part of that is because of my talk. I ended up earning the nickname of the OpenStack community Root Kit, because in the middle of the talk I said, "Here's the problem. We have this thing where we have to have it" -because Rackspace has to be able to change IP addresses and some of the root passwords and stuff on guest VMs that don't all have the same file system, whether that's a Linux file system or a Windows VM, so we have to put this agent there. But unfortunately, from the perspective of your user, it's a root kit. That just killed the whole thing.

Then I gave two other technical talks and I really have no idea what they were. I think one of them was on the architecture of Nova, and the other one, I really have no idea. So those four technical talks kicked it off. I was actually really skeptical about how people were going to react to OpenStack, whether or not it was going to become a thing, so I was really surprised by how enthusiastic all the people were, especially when they were pushing back on, oh, we have to do Xen versus KVM versus this versus that. And having all the companies that joined end up becoming members of the OpenStack Foundation was actually pretty awesome.

Niki Acosta:               So how did you decide to spin something out and do Piston?

Christopher MacGown:       So Joshua and I were both leaving our mutual employers at the time -Joshua was leaving NASA, I was leaving Rackspace. I'd been there for two years, and I'm like, ah, I'm ready to go, and he moved to Italy to work on this World Bank project called the Global Earthquake Model. I'd been kicking around -I talked to the [inaudible 00:09:41] about potentially joining them, and I talked to Alex Polvi, who ran Cloudkick at the time, about potentially joining them, but it was kind of awkward because both of those teams were in the process of getting acquired by Rackspace at the time. So I ended up talking with Joshua, and we had a mutual idea of what we wanted to do after whatever it was that was coming next.

We wanted to build this humanitarian [jewel 00:10:09] location cloud thing, so he said, "Hey, I've got a need for people right now to help kick off this Global Earthquake Model team. They're an open source project that's mostly scientists. They don't understand building technology. They don't understand, at the time, how to work with open source projects. They don't understand Scrum, they don't understand Agile, and they had written everything in Java and they need to be writing it in Python.

"Why don't you come out and join?" So I went out to live in Comeana, Italy for three months working on this project, helping them kick it off and building the very first v1.0 release. It wasn't actually 1.0, it was something like 0.98, and they actually released 1.0 like two years later, but we kicked that off, we helped the project start, and we made the first open source release. Then we came back here, and when we came back we saw that everyone in the OpenStack community at the time -this was late 2010, early 2011 -had been focusing on the needs of service providers, and that wasn't very valuable to most of the people for whom cloud is actually a thing.

Service providers have the expectation that everything is going to be metered, and you can hire 50 people, and if you amortize the cost of those 50 people over a thousand customers, it doesn't really matter. But for the purposes that we envisioned, we needed an OpenStack that could be automated and basically entirely turnkey, so that we could build this [Geode 00:11:52] thing that we thought we were going to build. It turns out that the market, the ecosystem, wasn't actually focusing on that, so we ended up saying, OK, here's what we're going to build. We're going to build this cloud thing, and once we get that solved we'll go back and we'll build our awesome thing on top of it.

So we raised money, started Piston in January of 2011, raised 4.5 million dollars over the next six months, and then we spent the last four years building a cloud operating system that is basically entirely distributed, hyper-converged, and almost entirely hands-free. And we never made it back to that cool thing we wanted to build, because there's just so much of everything else that has come out of what we've been trying to build, and we haven't even gotten there. We just very naively thought, oh, this is going to be three interns and in six months we'll have everything done and ready, and we can go on and build the cool thing we wanted to build.

Niki Acosta:               Do you think OpenStack was ready by the time you started working on a packaged distribution?

Christopher MacGown:       Parts of OpenStack were very, very ready. Unfortunately, those parts of OpenStack weren't at all integrated, so the OpenStack project as a whole wasn't ready. The Foundation didn't exist, so the ecosystem infrastructure that we take for granted now -the testing and code review process, all the stability, the Foundation, the interaction between the board and the technical committee -wasn't there, and the technology itself, the actual projects, wasn't actually ready at the time either. We as a community rushed through the original implementation of Keystone and called it core at the Diablo release, around the Essex summit in Boston, late-2011 time frame. We had just dropped the Diablo release, and from the perspective of the code we called Keystone part of core, but it actually didn't integrate with any of the services at all. So as a community, I think we dropped the ball a little bit in the first year of how we built things. We've come a long way in unifying these projects and making them integrate better.

Niki Acosta:               You're talking about being skeptical that OpenStack would even-were you surprised? Are you surprised now with how far it's come?

Christopher MacGown:       I'm really surprised by how far it's come. Part of my initial skepticism was that I didn't see a very large community growing up around it, especially when you brought 50 companies in but most of them were all the same. They brought 50 technologists in, but most of them were from a handful of companies, and most of them really wanted to sell giant servers. So we had Dell there originally, and I'm like, OK, I can see why Dell's here, they wanted to sell some more servers, but there weren't people like SwiftStack. SwiftStack didn't exist at the time, and there was no one from that perspective who wanted to do anything cool with Swift. Now you have things like SwiftStack, and ZeroVM is this awesome thing that's built on top of Swift that no one thought would be possible at the time when we launched OpenStack. So the fact that all of these things have exploded -even though Swift was already an open source-ish project at the time, it'd been up there on GitHub for a while -when nobody was actually doing anything interesting with it, and nobody was treating it as this ecosystem component that they could actually build something else on, is pretty remarkable.

Niki Acosta:               Jeff, I feel like I'm stealing the mic here. I'm sure you've got a question.

Jeff Dickey:                Yeah, I've got so many, I'm just trying to figure out which ones to ask. One of the things that's been on my mind lately, because you're talking about Swift and you're talking about the maturity of OpenStack: where do you think the developers in the community should be focusing? Should they be working on features or stability?

Christopher MacGown:       They need to be working on stability, and in a lot of the core projects, on feature parity between drivers in, say, for instance, Neutron. Neutron doesn't have feature parity with Nova Network yet, and at the same time, all of the drivers that are underneath it don't have feature parity among themselves. So it's basically a continually moving target internally, because they keep implementing these new features. And now that we've opened up Nova Network again -the OpenStack community has, the Nova team has -that's going to be another moving target, where Nova Network is meant to stay stable and simple, which means it doesn't have a lot of the features people think they need from a software-defined network. But as we add features there, then we're going to lose stability and also make it harder for Neutron to catch up.

Niki Acosta:               Are you guys using Neutron now in your distro?

Christopher MacGown:       We distribute Neutron and Nova Network. Nova Network is the default. If people want to use a software-defined networking product, we only support three: we support NSX, we support PLUMgrid, and we support OpenContrail right now, though that last one, with our integration, is not one I would recommend people use currently.

Jeff Dickey:                Mark Shuttleworth recently talked about splitting the modules up into core and common modules. What are your thoughts around that?

Christopher MacGown:       I think that is kind of what we're trying to do on the board. I also serve on the board, in Piston's Gold Member seat, and the DefCore process is trying to rationalize what 'core' is from the perspective of the trademark policy. Basically, currently, what core is according to the bylaws of OpenStack is OpenStack Swift and OpenStack Nova and nothing else. So the existence of the incubated releases right now, from a corporate standpoint or a corporate development standpoint on the part of the Foundation, has no real relevance to the things people are using. So DefCore, the process for the board, is to rationalize what core is -because we have like 200 projects now, if you consider both the incubated and the integrated releases and also everything in Stackforge, we have an overwhelmingly large number of projects -and getting our hands around that in a way that allows the technical committee and the board to work together to define what OpenStack is, without just saying, oh, it's these two projects, will end up making that core/common/incubated/ecosystem [processing 00:19:43] happen, I think.

Jeff Dickey:                It's a lot of projects. I've heard some of the naysayers say there are more OpenStack projects than there are enterprise customers. It seems like there's a lot of projects. Should they be split more or should they be consolidated?

Christopher MacGown:       I think a large part of why there are so many projects is that people see value in being the PTL of X or on the core team of some project, and for the purposes of ecosystem integration, I think having lots of projects is great. I don't think they should all have the same meaning, though, so something like, for instance, [Xecar 00:20:33], is probably not as meaningful right now as Glance is, and I'm saying that as someone who would actually like Glance to completely disappear. I think from the perspective of the OpenStack community, it could be consumed by either Swift or Nova and no one would lose any sleep over it.

I was going to point out the window that Mark [Washingburger 00:21:01] who was the former PTL, now works at Piston, but he's not in the office yet so I can't actually do that.

Niki Acosta:               He's on vacation. Maybe.

Christopher MacGown:       Oh, he's on vacation?

Niki Acosta:               I don't know.

Christopher MacGown:       I should check my calendar.

Niki Acosta:               So what else is the board talking about? Obviously, a lot of these meetings and conversations aren't always made public, but what is the board scrambling to talk about now?

Christopher MacGown:       I think the major thing is winning the enterprise. I know that was brought up as a component during the last OpenStack summit, the one in Paris, and that's the major focus for the OpenStack board for 2015. I think defining what DefCore is, and getting it beyond just defining the capabilities and the requirements for OpenStack Havana -actually bringing it forward to support Kilo and Juno and Icehouse and Grizzly and all these past releases -will be very valuable in explaining to the enterprise: here are the things you can do, here are the projects and products built on top of these projects that will work with these different releases at different times. And we can do that in a way that's defined and systematized, so that you can have a logo that says, 'I work with Kilo,' and whether you're selling to someone or you're just an open source project, you can have that be defined, and people will be able to use that to build upon.

Niki Acosta:               Yeah, I know at Metacloud it was always hard to kind of pinpoint what release we were using, because we took components of releases from different projects. Are you experiencing the same thing? Are you taking in features before they might be 100% ready to go and making them ready to go in your own way, or are you waiting?

Christopher MacGown:       So we have done both. We've waited, we've pulled features forward, we've pulled projects forward. When we first started working on this project we'd been internally calling Undercloud -it's basically separating the networking component of the cloud from the component that runs Neutron, for instance -we had to pull features out of the Juno release of Neutron in order to support that, even though internally our distribution is actually an Icehouse distribution. In the past, most of these features that we've had to pull back have actually been from Neutron.

We pulled a pre-release version of the OVS driver back when we were going from Folsom into Grizzly; we had to pull a trunk version of a patch for OVS to allow it to do multi-host routing. So there are features that we end up building or pulling from the community before they've actually landed, and then we try to backport them -figuring out the idea of what they're trying to build, doing that, and hopefully throwing it away after we [crosstalk 00:24:27] push forward.

Niki Acosta:               It completely amazes me how many people actually think they can do this on their own with little expertise. It comes back to that whole thing of thinking of OpenStack as a product versus thinking of it as like a Linux kernel, to put it in Randy Bias' words. I think we've seen a lot of enterprises try to go the DIY route and completely fail for that very reason. Trying to assemble all the stuff and keep up with it is very difficult, right?

Christopher MacGown:       Yeah, it is. Especially as all of these features and all of these projects sort of expand and we get, oh, well, now Neutron isn't just networking, it's also firewalls as a service and load balancers as a service and all of these other things that maybe should either be their own project or a value-add from somewhere else, and being able to figure out what the best case is. So Piston CloudOS is a distributed system that builds cloud infrastructure. We deploy OpenStack on top of that, but it's a curated OpenStack. I don't want to be distributing every program under the sun, just because there's too much effort involved with keeping up with everything.

You can't really tell, but I've got like 17 gray hairs here. [crosstalk 00:25:53] like counted them, and I think those 17 gray hairs are from trying to keep up with all the emails I'm getting on all the different projects that are happening. Some of them I don't read -actually quite a few of them I don't read -but just trying to keep up with that, and being able to explain to people when I'm talking to customers, or the media, or even people internally, what's going on with OpenStack, is actually quite a lot of effort.

Jeff Dickey:                What is going on right now? What's exciting you the most?

Christopher MacGown:       So the things that are exciting me the most about OpenStack are some of the networking projects that are growing up around it, not so much things that are happening inside of it. NFV is a major push from the perspective of the networking community, and there have been some really interesting projects that have grown out of a reaction against some of the things that people have done historically with SDN. I really like Akanda, which just spun out of DreamHost, and then Project Calico is another SDN that does really simple [note 00:27:17] network-on-steroids type SDN that plugs into Neutron, and I'm really excited about those. And then there's StackTach, which is a Stackforge metering project that does metering in streams rather than in batch, so I'm really looking forward to seeing, as that develops, how that will either replace Ceilometer in some distributions of OpenStack, or be something that can actually feed data into Ceilometer -and then throw Ceilometer out of the cloud entirely and have that be some application that runs somewhere else.
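To make the stream-versus-batch distinction concrete, here is a minimal sketch -not StackTach's actual API; the tenant names and CPU-hour samples are hypothetical -contrasting a batch aggregator that produces totals once per collection window with a streaming aggregator that updates totals as each notification arrives:

```python
from collections import defaultdict

# Hypothetical usage notifications: (tenant_id, cpu_hours). In a real
# deployment these would arrive as events on the message bus.
notifications = [
    ("tenant-a", 1.0),
    ("tenant-b", 0.5),
    ("tenant-a", 2.0),
    ("tenant-b", 1.5),
]

def batch_metering(samples):
    """Aggregate usage only after a whole window of samples is collected."""
    totals = defaultdict(float)
    for tenant, cpu_hours in samples:
        totals[tenant] += cpu_hours
    return dict(totals)

def stream_metering(samples):
    """Update running totals as each notification arrives, yielding an
    up-to-date view after every event instead of once per window."""
    totals = defaultdict(float)
    for tenant, cpu_hours in samples:
        totals[tenant] += cpu_hours
        yield tenant, totals[tenant]

print(batch_metering(notifications))                  # one result per window
for tenant, running_total in stream_metering(notifications):
    print(tenant, running_total)                      # a result per event
```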

Niki Acosta:               So what's your secret sauce at Piston?

Christopher MacGown:       Hold on, I think we've renamed that stuff. What we've built is this technology that, internally, we called Moxie -and we'd been calling it Moxie for a while; we'd been undergoing some brand name changes there. It is a distributed state machine built on top of an algorithm called Zab, so if you're familiar with ZooKeeper, we use that as the backend consensus algorithm to do master election and consistency between these state machines that run throughout the cloud. So we have a hyper-converged architecture for how we deploy things -that means your storage, your networking, and your compute running on all of the hosts -and then on top of that we have the management plane, also entirely distributed.

So when you bring up a cloud, you plug in a USB key or some image and install that on a server, and our software goes out and automatically detects nodes that aren't even powered on, powers them on, and netboots them into a Linux distribution that is basically slimmed down and kernel-hardened, currently designed only to run OpenStack -other things down the road. And we have our services basically load these state machines, and then each node that comes up is able to act in concert with all the other nodes, or by itself, without having to communicate with a central controller. So we can actually bring up things that should run everywhere, things like Nova.

The API server for Nova should run on every host -it's not valuable to have it on just one -and we can bring those up anywhere without having to have any communication between the different hosts. But things that need to be running in one place, like the master database for your cloud, should be master-elected, and those can run on any of the hosts. We use the distributed object store, Ceph, as the underlying file system, object store, and data store for most of the critical cloud services, so we can bring up your master database server and move it to another host if your host has any problems. So you can actually take a server, take a rack of servers, bring up a cloud on them, and not have to worry about anything that's running inside it. From an end-user's perspective, you just end up with OpenStack.

You get the APIs, you get the dashboard, and if something happens that you would normally have to manage, our software already deals with the orchestration of that and handles failover, high availability, and fault tolerance.
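As a rough illustration of the master-election pattern described above -this is not Piston's Moxie code; the ZooKeeper hosts, the election path, and the run_master_service function are hypothetical placeholders -here is how a singleton service such as the cloud's master database could be leader-elected using the kazoo library's Election recipe on top of ZooKeeper (whose broadcast protocol is Zab). Every node runs the same code, and only the elected leader executes the callback:

```python
from kazoo.client import KazooClient

def run_master_service():
    # Placeholder for the work only the elected master should do --
    # in the interview's example, hosting the cloud's master database.
    print("Elected master; starting the singleton service...")

if __name__ == "__main__":
    # The ZooKeeper ensemble addresses and election path are illustrative.
    client = KazooClient(hosts="zk1:2181,zk2:2181,zk3:2181")
    client.start()

    # Every node contends under the same path; ZooKeeper guarantees that
    # exactly one contender holds leadership at a time. If the leader dies,
    # another contender is elected and its callback is invoked there.
    election = client.Election("/cluster/master-db", identifier="node-1")
    election.run(run_master_service)  # blocks until elected, then runs the callback
```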

Niki Acosta:               So it sounds like you've solved for installation and it sounds like you've solved for redundancy. Assuming those things are common among other providers, why would someone choose Piston?

Christopher MacGown:       We didn't actually solve for installation, we solved for scaling. Our software is designed to be horizontally and linearly scalable, so if you start off with five nodes in a cloud and you want 20% more capacity, you add one more node -that node is 20% of what you started with, though it works out to about 18% of the cluster once it's grown -and the installation mechanism for that sixth node is the same as it was for the first five. So it automatically joins the cluster, it automatically joins the cloud, and it's automatically a member of the cloud without having to have any human being go, "All right, we've got to install this software on it." It's just a thing.
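A quick sketch of the proportion arithmetic behind the numbers quoted above, assuming the five-node example: the added node is 20% of the original capacity, but a smaller share of the cluster once it has grown.

```python
nodes_before = 5
nodes_added = 1
nodes_after = nodes_before + nodes_added

# The new node adds 20% of the capacity you started with...
share_of_original = nodes_added / nodes_before   # 0.20
# ...but is a smaller share of the cluster once it has grown.
share_of_grown = nodes_added / nodes_after       # ~0.167

print(f"{share_of_original:.0%} of the original capacity, "
      f"{share_of_grown:.0%} of the grown cluster")
```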

We also have focused primarily on security needs and ease of use. So the idea is that anything that can be automated should be automated. The idea that I've tried to get away from is that service-provider, carrier model of, "Oh, we can add a whole team of 100 people to deal with this," when most companies aren't 100 people or don't have the expertise to put 100 people on this task. So I want to basically automate everything about handling the infrastructure and allow companies to actually build their value by building their value, not by hiring a team of people dedicated to building clouds.

