Virtual Bike Sheds

Recently we've been doing a lot of work to revamp how the Nova Scheduler service manages the resources that are being requested in the cloud. The original design was very compute-centric, as the only thing we originally designed for was finding host machines that had enough CPU, disk, and RAM for the requested virtual machine. That design has proven far too limiting, so in the past year we began making things simpler and more generic with the concept of Resource Providers. A resource provider is any entity that has something that can be shared in a virtual environment. Besides physical compute hosts, this also covers shared storage, network resources, block storage, and anything else that can be virtualized. The things being provided are referred to as Resource Classes, and the amounts of each are represented as integers, making comparison simple (under the old model, many complicated conditional code structures were needed to compare different types of things). These amounts are referred to as Inventory, and the consumed amounts of inventory are referred to as Allocations. Determining the available amount that a provider has of a particular resource class is a simple matter of subtracting the allocations from the inventory. This assumes, of course, that all of the inventory for a particular resource class is identical and interchangeable. (Hint: it might not be!)
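To make this concrete, here is a minimal Python sketch of the model as I've described it; the class and attribute names are purely illustrative and are not the actual placement code or API.

```python
from collections import defaultdict


class ResourceProvider:
    """Anything that offers consumable resources: a compute host, shared storage, etc."""

    def __init__(self, name):
        self.name = name
        self.inventory = defaultdict(int)    # resource class -> total amount
        self.allocations = defaultdict(int)  # resource class -> consumed amount
        self.traits = set()                  # qualitative, boolean attributes

    def available(self, resource_class):
        # Available = inventory minus allocations. Note the built-in assumption:
        # every unit of a given resource class is identical and interchangeable.
        return self.inventory[resource_class] - self.allocations[resource_class]


host = ResourceProvider("compute-node-1")
host.inventory.update({"VCPU": 32, "MEMORY_MB": 131072, "DISK_GB": 2000})
host.traits.add("SSD")
host.allocations["VCPU"] += 4      # a VM consumes 4 VCPUs
print(host.available("VCPU"))      # -> 28
```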

So far, everything seems straightforward enough. This model is designed to address only the quantitative aspect of resources; qualitative aspects are represented by boolean traits that can be assigned to resource providers (and only to resource providers). The classic example was different compute hosts with disk space available, where some of it was SSD and the rest slower spinning disk. The disk space was all storage, measured in GB and treated equivalently. It was only the providers that were different, as distinguished by their differing traits.

However, once we began to consider more complex resources, things didn’t fit as well. SR-IOV devices, for example, allow their virtual functions (VFs) to be shared by virtual machines running on the host with the SR-IOV device. It is these VFs that are the actual resources provisioned to the virtual machines. Each compute node can also have multiple devices available, and they can be (and usually are) attached to different networks. So if we assume two devices that each offer 8 VFs, our typical model would have an inventory of 16 VFs for that resource provider.

It’s clear, though, that those 16 VFs are not interchangeable. A VM needs a VF attached to a particular network, and so we need to tell those two groups of VFs apart. The current solution being put forward tries to solve this by introducing a hierarchy of resource providers in a parent-child relationship, referred to as nested resource providers. In this approach, the compute host is the parent resource provider, with two child resource providers (the two SR-IOV devices). Each of those would have an inventory of 8 VFs, and we would distinguish them by assigning different traits to the child resource providers. While this approach does work, in my opinion it’s an unnecessary complication that is more of a workaround for two incorrect assumptions: that all inventory for a particular resource class is identical, and that traits describe resource providers.
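For illustration, here is a rough sketch of what the nested approach looks like as data: a parent compute host with one child provider per SR-IOV device, distinguished by traits. The names (including the trait strings) are made up for the example.

```python
compute_host = {
    "name": "compute-node-1",
    "inventory": {"VCPU": 32, "MEMORY_MB": 131072},
    "children": [
        {"name": "sriov-dev-1", "traits": {"NET_1"},
         "inventory": {"SRIOV_VF": 8}, "allocations": {}},
        {"name": "sriov-dev-2", "traits": {"NET_2"},
         "inventory": {"SRIOV_VF": 8}, "allocations": {}},
    ],
}

# Finding a free VF on network 1 now means walking the provider tree and
# matching traits on the child providers, not just checking an inventory total.
candidates = [
    child for child in compute_host["children"]
    if "NET_1" in child["traits"]
    and child["inventory"]["SRIOV_VF"] - child["allocations"].get("SRIOV_VF", 0) > 0
]
```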

The reason for this disconnect is that the original design of the resource provider/class model was too simple. It relied on the relation between a compute node and the inventory it controlled being flat, so we could assign traits *of the inventory* to its provider and it all worked. Think about it: is SSD vs. spinning disk really a trait of the compute node? Or is it a trait of the storage system? The iMac I have for our family has both SSD and spinning disk storage. If it were a compute node, what would its trait be set to? Clearly, saying that the storage type is a trait of the compute node is not correct. It is this error that requires complex workarounds such as nested resource providers.

So what is the alternative? I see two; there may be more. The first would be to create a separate ResourceClass for each type of resource. This has the advantage of preserving the notion that all inventory for a given resource class is interchangeable. In the SR-IOV case, there would be two classes of VFs (one for each network connection type), and the request to build a VM would specify which network the VF requires. Unfortunately, some people resist the idea of multiple resource classes for similar things; I believe that's an unfortunate result of naming them 'classes', since most of us who are experienced in OOP see that as bad class design. If they had been named 'ResourceTypes' instead, I doubt there would be as much resistance. The second approach doesn't add more resource classes; instead, it assigns traits to the ResourceClass to distinguish among their respective inventories. While this may more accurately model the real world, it would require changes to the inner workings of the placement engine, which assumes that all the inventory for a particular ResourceClass is interchangeable; it would now be the combination of class + traits that would have to be unique. It would also require extra calls to the traits API to find the right ResourceClass. That seems like a lot of complication just to avoid creating separate ResourceClasses.

Let’s imagine another example: Bike Shed As A Service! Our cloud provides virtual bike sheds using a Bike Shed ResourceProvider that can provide bike sheds on demand. There are a total of 32 bike sheds: 8 blue, 8 green, and 16 red (because red is the best color, obviously!). What would be the most practical way of representing them in the ResourceProvider framework? Can we really say that all the bike sheds are identical? Of course not! There is no way that a blue shed is anything like a prized red shed! So when I request my bike shed, of course I will specify “red bike shed”, not just any old shed.

The correct way to represent such a situation is to have a Bike Shed ResourceProvider with three ResourceClasses: RedBikeShed, BlueBikeShed, and GreenBikeShed, with inventories of 16, 8, and 8 sheds, respectively. Contrast this with the nested resource provider proposal, which would have a BikeShed ResourceProvider with three child ResourceProviders, carrying traits of 'red', 'blue', and 'green' respectively, each with its own separate inventory as above. Besides the inefficiency of the SQL joins required to query such a design, it really doesn't reflect reality. There isn't any such intermediary 'provider'; it's just an artifact of the workaround for an incorrect model.
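As a sketch (again with made-up names), the flat representation is simply one provider whose inventory carries the three classes directly:

```python
shed_provider = {
    "name": "bike-shed-provider",
    "inventory": {
        "RedBikeShed": 16,
        "BlueBikeShed": 8,
        "GreenBikeShed": 8,
    },
    "allocations": {},
}

# A request for a red shed is a single lookup against the provider's own
# inventory; there is no tree to traverse and no extra joins to perform.
cls = "RedBikeShed"
inv, alloc = shed_provider["inventory"], shed_provider["allocations"]
if inv[cls] - alloc.get(cls, 0) > 0:
    alloc[cls] = alloc.get(cls, 0) + 1
```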

To get back to the real-world SR-IOV example, it’s clear that the inventory of VFs for each device are not interchangeable, so therefore they belong to separate resource classes. We can bike shed on how to best name them (see what I did there?), but the end result would be an inventory of 8 VFs on network 1, and 8 VFs on network 2.

I know that the Bike Shed example is a very simple one, but it's designed to show the problems with the nested approach. Let's make sure that we aren't digging ourselves into a design hole that will make things hard to work with as the placement engine design grows to incorporate all sorts of resources. Perhaps there is a case that can only be solved with the nested approach, but I haven't seen it yet.

Ride to the River 2016

The Valero Ride to the River is a two-day cycling event to raise money for research for a cure for Multiple Sclerosis. This was the third time I’ve ridden in it, but what made this year different is that this is the first time that Mother Nature didn’t completely wash out one of the days. We had gorgeous weather, with temperatures cool in the morning, and only climbing to the low 80sF (around 25-27C) in the afternoon.

Starting off on Day 1 (that’s me in the bright green)


The ride starts in San Antonio, and wanders east and north until it reaches New Braunfels. This route is about 71 miles, but near the end there is a choice: turn left, and finish your ride. Or, you can turn right, and go up the Guadalupe River for 15 miles, turn around, and return back, making the total ride 100 miles. As I had done a full century on my last ride, I didn’t feel the need to push myself to prove anything. I had told everyone that I was only doing the 71. But as the ride progressed, I continued to feel fresh. This was most likely due to the very mild weather: temperatures never rose very high, and there were enough clouds so that you weren’t baking in the sun the entire time. By the time I reached the lunch rest stop (50 miles in), I started thinking seriously about going for the century, but I told my wife I’d wait until the last rest stop before the decision point.

Lunch after 50 miles, Day 1


When I reached that stop, at around mile 65, I knew that I wanted to do the full century. I remembered the only other time that I did this course, and what a struggle those last 30 miles were, so I braced myself for the ride. I was very surprised to find that, while definitely an effort, it was nowhere near as exhausting as it had been the previous time. Either they smoothed the hills out, or I was in much better shape! 😉 So while I didn’t set any speed records, I finished the century much easier than my previous two. Here’s the record of my ride, thanks to the Runkeeper app.

Enjoying a well-deserved beer after completing the century!


My well-earned Century Rider armband.


The next day offered a choice of two looping routes: 61 miles or 38 miles through the Texas Hill Country. I had done the 61 mile route a couple of years ago, and remembered how grueling the hills were on that ride, so I chose to only do the 38. For comparison, the route for Day 1 was through areas to the east of San Antonio, which is relatively flat. I had about 4,300′ of total climb (43 ft/mile). This route took us to the northwest of New Braunfels, which is much hillier by far. The total climb was about 2,700′, or over 71 ft/mile! And as you can see from the graph below, most of that climb was in the first half of the ride. There isn't much else to say about the Day 2 ride. The weather was once again perfect, and while the ride was difficult at times, it felt good overall. Here's the Runkeeper summary for Day 2.

Of course, I can’t take all the credit. The ride was extremely well-organized by the MS Society, with well-staffed rest stops every 12-15 miles. They also arranged for police support for traffic management, so that riders didn’t get stuck (or struck!) at busy intersections. My belated apologies to the drivers who were made to wait while 2,000 riders passed through!

I also don’t think I would have been able to accomplish this without the loving support of my wife Linda, who gives me the motivation to stay healthy so that I can live a long life with her! Three years ago I thought it was a pretty amazing accomplishment to complete a century at age 55, but now to have done two centuries this year at age 58 is really more than I ever expected to achieve, and I have Linda to thank for that.


Pair Development

If you’ve worked on large open source projects, one of the difficulties is dividing the workload. The goal, of course, is to spread it out so that every developer has a workload that will keep them busy, and everyone is working in sync towards a common goal. This isn’t easy in practice, as there is no top-down authority to hand out assignments and keep everyone on track, as there is in a corporate development environment. It requires a good deal of communication among the members of the team, as well as a good deal of trust.

This problem was brought to light recently in the Nova community. The issue was with the subteam working on the scheduler/placement engine, of which I’m a member. During the Newton development cycle, there was a significant bottleneck due to the fact that one person, Chris Dent, was responsible for a large chunk of work in designing and coding the Placement API and underlying engine, while the rest of us could only help by doing reviews after the code was written. And this isn’t a new thing: during Mitaka, it was Jay Pipes who was the bottleneck with the development of the Resource Providers concept, and in Liberty, it was Sylvain Bauza with the huge amount of work he did to integrate the Request Spec into Nova. Don’t get me wrong: I’m not criticizing any of these people, as they all did great work. Rather, I am expressing frustration that they bore the brunt of the load, when it didn’t have to be that way. I think that it is time to try a different approach in Ocata.

I propose that we use Pair Development. No, not Pair Programming – that's an entirely different thing. Pair Development is when each "chunk" of work is undertaken not by a single developer, but by two. They discuss the path they want to take ahead of time, and instead of splitting the work, they both work on the same patches at the same time. Wait, you say – won't this slow things down? I don't believe that it will, for several reasons. First, when discussing a design, having multiple sets of eyes will reduce the number of dead ends, in the same way that bugs are reduced in pair programming by having both developers review the code as it is being written. Second, when a reviewer finds an issue with a patch, either developer can make the fix. This is an even greater benefit if the two developers are in different, but overlapping, time zones.

We also have as evidence the week before the most recent Feature Freeze: the placement stuff needed to get in before FF, so a whole group of us pulled together to make that happen. Having a diverse set of eyes uncovered several edge cases and inconsistencies in the code, and those were resolved pretty quickly. We used IRC mostly, but had a Google Hangout at least once a day to discuss any outstanding, unresolved matters, so that we would all be on the same page. So yeah, the time pressure helped instill a bit of urgency in us all, but I think it was having all of us own the code, not just Chris, that made things happen as well as they did. I know that I was familiar with the code, having reviewed much of it before, but once I had to change it and test it myself, my understanding grew much deeper. It's amazing how much deeper you understand something when you touch it instead of just looking at it.

Another benefit of pair development is that it provides much more continuity when one of the developers takes some time off. Instead of the progress getting put on hold, the other member of the development pair can continue along. It will also help to have more than one person know the new code intimately, so that when a behavior surfaces that is not expected, we aren’t depending on a single person to figure out what’s going on.

So for Ocata, let’s figure out the tasks, and make sure that each has two people assigned to it. I will wager that come the end of the cycle, it will help us accomplish much more than we have in previous releases.

Changing WordPress Permalinks

When I started this blog a few years ago, I hadn't used WordPress before, and went with the defaults pretty much everywhere. The one that bothered me, though, was the default format for permalinks: it's just plain ugly. Worse, somewhere along the line I messed up, and ended up with long, unwieldy permalinks. I've been wanting to switch to something cleaner for a while, but I didn't want to break all of the existing links that I've shared. So I kept the long format.

I finally got sick enough of looking at those terrible URIs and started searching to see if anyone had run into the same issue, and, as expected, I was not alone. I found the Change Permalink Helper WordPress plugin by Frank Bueltge, installed it, and I was done! Simple! The ugly URLs are now nice and clean, but the old ones still work.

Thanks, Frank, for a nifty little plugin that made my blogging life easier!

Is Swift OpenStack?

There has been some discussion recently on the OpenStack Technical Committee about adding Golang as a “supported” language within OpenStack. This arose because the Swift project had recently run into some serious performance issues, which they solved by re-writing the bottleneck process in Golang with much success. I’m not writing here to debate the merits of making OpenStack more polyglot (it’s no secret that I oppose that), but instead, I want to address the issue of Swift not behaving like the rest of OpenStack.

Doug Hellmann summarized this feeling well, originally writing it in a pastebin, but then copying it into a review comment on the TC proposal. Essentially, it says that while Swift makes some efforts to do things the "OpenStack Way", it doesn't hesitate to follow its own preferences when it chooses to.

I believe that there is good reason for this, and I think that people either don’t know or forget a lot of the history of OpenStack when they discuss Swift. Here’s some background to clarify:

Back in the late ’00s, Rackspace had a budding public cloud business (note: I worked for Rackspace from 2008-2014). It had bought Slicehost, a company with a closed-source VPS system that it used as the basis for its Cloud Servers product, and had developed a proprietary object storage system called NAST (Not Another S Three: S3, get it?). They began hitting limits with NAST fairly soon – it was simply too slow. So it was decided to write a new system with scalability in mind that would perform orders of magnitude better than NAST; this was named ‘Swift’ (for obvious reasons). Swift was developed in-house as a proprietary software project. The development team was a small, close-knit group of guys who had known each other for years. I joined the Swift development team briefly in 2009, but as I was the only team member working remotely, I was at a significant disadvantage, and found it really difficult to contribute much. When I learned that Rackspace was forming a distributed team to rewrite the Cloud Servers software, which was also beginning to hit scalability limits, I switched to that team. For a while we focused on keeping the Slicehost code running while starting to discuss the architecture of the new system. Meanwhile the Swift team continued to make strong progress, releasing Swift into production in the spring of 2010, several months before OpenStack was announced.

At roughly the same time, the other main part of OpenStack, Nova, was being started by some developers working for NASA. It worked, but it was, shall we say, a little rough in spots, and lacked some very important features. But since Nova had a lot of the things that Rackspace was looking for, we started talking with NASA about working together, which led to the creation of OpenStack. So while Rackspace was a major contributor to Nova development back then, from the beginning we had to work with people from a wide variety of companies, and it was this interaction that formed the basis of the open development process that is now the hallmark of OpenStack. Most of the projects in OpenStack today grew out of Nova (Glance, Neutron, Cinder), or are built on top of Nova (Trove, Heat, Watcher). So when we talk about the “OpenStack Way”, it really is more accurately thought of as the “Nova” way, since Nova was only half of OpenStack. These two original halves of OpenStack were built very differently, and that is reflected in their different cultures. So I don’t find it surprising that Swift behaves very differently. And while many more people work on it now than just the original team from Rackspace, many of that original team are still developing Swift today.

I do find it somewhat strange that Swift is being criticized for having "resisted following so many other existing community policies related to consistency". They are, and always have been, distinct from Nova, and that goes for the community that sprang up around Nova. It feels really odd to ignore that history, and to sweep Swift's contributions away or disparage their team's intentions, because they work differently. So while I oppose the addition of languages other than Python for non-web and non-shell programming, I also feel that we should let Swift be Swift and let them continue to be a distinct part of OpenStack. Requiring Swift to behave like Nova and its offspring is as odd a thought as requiring Nova et al. to run their projects like Swift.

Out of the Closet

From the time he was an adolescent, Johnny was always aware that he was somehow different than others. His parents, teachers, ministers, and neighbors all told him things that he didn’t feel were correct. He had thoughts and feelings that were clearly considered evil by the society around him, but try as he might, those feelings never went away. So in public he pretended to be the way they expected him to be. He got pretty good at pretending; so good that no one had a clue as to his true nature. He dreamt of a day when he could stop pretending, and be who he really was.

At first he thought he must be the only one who had to keep such a secret. Sure, there were a few people like him who were open about who they were, but they were reviled among his family and friends, and he sure didn’t want to become an outcast. So he kept pretending.

A few years later things slowly started to improve for Johnny. Many people in the media, and even some popular politicians, began to talk about these things. Not openly, of course – that would never have worked. But they clearly hinted at it, using code words and loose word associations that were understood by their listeners, but which could always be publicly denied as having any subtext. He began to notice that others were responding to these signals. Lots of other people. He began to understand that he was far from being alone.

He also started to think that if people like him were to unite and work together, they could change the underlying culture of society. So he started meeting with other like-minded people. He began to become politically active, and supported those candidates who were clearly sympathetic to his view of the world. As more and more of these candidates for change were elected, he began to feel more confident that things were finally changing!

And now, after years of supporting candidates who spoke about these matters by using carefully-chosen code words, a new, fresh candidate emerged who spoke openly about the things he had always believed! Donald Trump didn't bother with the polite code words; he said what he felt, and this was exactly what Johnny had been waiting for: someone who represented those feelings.

For Johnny is a racist. He never liked blacks or Jews, and always thought gays were perverts who should be locked away. He wanted to send all the Mexicans back, and keep Muslims in their countries, where we could bomb the shit out of them. He doesn't see anything wrong with the Confederate flag, except that people are being too "politically correct" about it. Oh, and the misogyny! He had always felt that only men should be leaders, since women were inferior. He wished that women would someday just shut up about equality, and go back to their "traditional" roles of cooking, cleaning, and raising babies, while always submitting to his sexual desires.

Johnny still can’t say those things out loud in public, because he knows that he would be ostracized socially, and would probably lose his job if his boss knew. So he still pretends, but come November, he will ecstatically cast his vote for Trump. And despite polls showing that Trump has nearly no chance of winning, Trump will end up getting millions of votes from people like Johnny who are skilled at acting one way in public, but who secretly long for the days of segregation and male dominance.

Don't kid yourselves into thinking that people like Johnny are rare. All you have to do is spend any time on the internet and they will use that anonymity to reveal themselves. They are much more common than you think, and if you get complacent reading polls that show Trump as wildly unpopular, you will be in for a shock when he continues to beat the pollsters. Because polls rely on people saying what they honestly think, and these racists may be ignorant, but they aren't dumb. They will happily profess to be shocked by what Trump says when asked publicly, while inwardly smiling and thinking "ah, one of us!". Don't fall into that trap. Treat him, and those who support him in the shadows, as the serious threat that they are.

The Second Century

No, I’m not talking about history – this is about my cycling ride on Saturday. I participated in the 2016 Tour de Cure San Antonio, and completed the 103-mile course. I’ve only ridden a century (a 100-mile ride) once before, and my attempts at doing another were thwarted twice: once, a year later, when the entire ride was washed out by heavy thunderstorms, and then again at last year’s Tour de Cure, when they closed the century course early due to thunderstorms.

Lining up for the start of the ride (at 7am)!


Well, this year’s ride had its share of thunderstorms, too, but fortunately they were at the end. The day started off overcast and threatening-looking, but nothing came of all those clouds. About 30 miles into the ride the sun burst through, and I was hoping that it would stick around for a while. However, we only got to enjoy the sunshine for an hour or so until the clouds returned. It kept looking darker and darker as the ride progressed, and then at the rest stop at mile 80 there were event officials warning that a little ways up the road it was already raining heavily. They had vehicles that would shuttle you and your bike to the finish line if you didn’t want to ride through the storm, but that wasn’t what I had set out to do. What’s a little water, anyway?

To be honest, I was feeling pretty drained after 80 miles. When you sweat while cycling, the breeze against you dries it quickly, so after a few hours it feels like a salty crust. My leg muscles also felt like they had begun to run out of energy. But I set out to continue the ride anyway, and sure enough, about a mile later the skies opened up. Within minutes I was soaked from my helmet to my shoes. Oddly enough, though, it was actually re-invigorating! And once you’re wet, more rain isn’t getting you any wetter, so I rode on. The loud cracks of thunder sounded great, like music for a film I was starring in. Yeah, it felt pretty dramatic!

So I made it to the finish. The first time I did a century I was struggling – hard. I wasn’t even running on fumes then; hell, I would have loved to have had some fumes at that point. I had to stop several times in that last 30 mile loop to regain enough strength to keep going. So completing that ride was a matter of sheer will power. This year it was different: sure, I was tired during the ride, and a bit stiff afterwards, but when I got within a few miles of the finish, I found another gear and sprinted my way in.

Crossing the finish line after 103 miles!


I think that there were several differences this year. I had trained much better this time, so my legs were better able to keep going for the distance. It was also much cooler, with temperatures in the 70s (instead of around 90F). And the rain, while making some aspects uncomfortable, certainly helped to refresh me. Finally, the course this year didn’t have very many severe hills. It had lots of climb, but nothing compared to the earlier course, which featured several killer hills.

Posing with my medal after finishing the ride, soaking wet!


There are three sets of people I want to thank: first, the American Diabetes Association, for organizing this event and making it run so smoothly – you’re really doing great work! Second, to the members of the ProFox online community for generously donating to support me. Together we raised $500! And finally, of course, to my wonderful wife Linda, who encouraged me every step of the way, and even drove back home to get my water bottles that I had forgotten. Hey, it was 6 in the morning, and my brain hadn’t caffeinated enough yet!

Linda and I, just before the start of the ride

Mea Culpa and Clarification

With my recent posts I seem to have confused people, and instead of helping us all see a better solution, I’ve made things murkier. So mea culpa.

The confusion comes from mentioning two distinct and mostly unrelated problems in different posts: the issues with the current Nova Scheduler regarding resource modeling and scalability, and the problem with fragmented data in the Cells V2 design. Because I proposed Cassandra as a solution to the first, many assumed that I was promoting it as the cure-all for everything in Nova. That’s not the case, so let me start with the focus on the cells issue.

The design of Cells V2 has a globally-available database, and separate database instances in each cell. The rationale was that this limits the failure domain, so if a single cell's DB (or any other local service) goes down, the rest of the cloud will still operate normally. While this is a big advantage for the message queue, it comes at a high cost for data, as it will now be difficult to get a view of, say, a user's resources across cells. Users don't see (and can't specify) the cell for their instance, so it is important to keep that global view. The response to my criticism was split between "yeah, that's a bad idea" and "look, we can add this additional dependency and layer of complexity to fix it!". The ROME approach of replacing MySQL with Redis was interesting, but further discussion on the email list pointed to a much better choice (IMO): Vitess. Vitess would provide the failure isolation without having to fragment the data. So I would prefer to see everything moved to a single database, and if failure isolation and redundancy are important for the database, add a tool like Vitess to handle that. I don't think that Cells V2 is a bad idea; quite the opposite is true. My only concern is the data design and the implications of that design for everything else in Nova.

Now to get back to the Scheduler, my proposal for Cassandra was based on two things: fast, reliable data availability without duplication and syncing, and the difficulty of modeling very different resource types in a single, inflexible relational design. Those were the biggest problems facing the Scheduler, and as the long-term plan is to separate the Scheduler into its own service so that it can support an even greater number of resource types, it seemed like settling on a static resource model now was going to lead to huge technical debt in the future. I had hoped to spur a discussion about that, and it certainly did. But let me make clear that I don’t think those arguments apply to Nova as a whole.

So again, mea culpa. Let’s keep the discussions going, because even though there has been some negative energy released in the process, the overall impact has been quite positive. I had never heard of Vitess before, and had no idea that it allowed YouTube to be able to use MySQL to handle the data loads it does. It’s exciting to see all these incredibly smart people with different technical backgrounds work together to come up with better and better solutions.

Fragmented Data

(This is a follow-up to my earlier post on Distributed Data)

One of the more interesting design sessions today at the OpenStack Design Summit was focused on Nova Cells V2, which is the effort to rework the way cells work in Nova. Briefly, cells are a mechanism for allowing separate independent deployments to work as a single cloud, primarily as a way to provide horizontal scalability. They also have other uses for operators, but that's the main reason for them. And as separate deployments, they have their own API service, conductor service, message queue, and database. There are several advantages that this kind of independence offers, with failure isolation being one of the biggest. By this I mean that if something goes wrong and a cell becomes unreachable, it doesn't affect the performance of the remaining cells.

There are tradeoffs with any approach, and this one is no different. One glaring issue that came up at that session is that there is no simple way to get a global view of your cloud. The example that was discussed was the common case of listing all your instances, which would require querying each cell independently, aggregating the results, and then sorting the aggregated records. For small clouds this process is negligible, but as the size grows, so does the overhead and complexity. It is particularly problematic for something that requires multiple calls, like pagination. Let’s consider a site with thousands of instances spread across dozens of cells. Typically when querying a large list like that, the API will return the first few, and include a link for the next batch. With a fragmented database, this will require some form of centralized caching approach, or, if that’s not feasible or the cache is stale, re-running the same costly query, aggregation, and sorting process for each page of data requested. With that, any gain that might have been realized by separating the databases will be more than offset by a need for a way to efficiently recombine that data. This isn’t only a cost for more memory/CPU for the API service to handle the aggregation and caching, which will only need to be borne by the larger cloud operating companies. It is an ongoing cost of complexity to the developers and maintainers of the Nova codebase to handle this, and every new part of Nova will be similarly difficult to fit.
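To make that cost concrete, here is a sketch of what a cross-cell instance listing has to do. The cell database interface (query_instances) is hypothetical, but the shape of the work (query every cell, merge, sort, truncate, and then repeat it all for every page) is the point.

```python
import heapq


def list_instances(cell_dbs, sort_key="created_at", limit=100):
    per_cell = []
    for cell_db in cell_dbs:
        # Every cell must be queried, and each must return up to `limit` rows,
        # since in the worst case the entire page could live in a single cell.
        per_cell.append(cell_db.query_instances(sort_key=sort_key, limit=limit))

    # Merge the per-cell sorted results into one global ordering and keep only
    # the requested page. For the *next* page, all of this work has to be
    # repeated (or the merged result cached somewhere central).
    merged = heapq.merge(*per_cell, key=lambda row: row[sort_key])
    return list(merged)[:limit]
```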

There are other places where this fragmented database design will cause complexity, such as having the Scheduler require a database connection to every cell, and then query every cell on each request, followed by aggregating the results… see the pattern? Splitting a database to improve performance, or sharding, only makes sense if you shard along a line that logically separates the data so that each shard can be queried efficiently. We’re not doing that in the design of cells.

It’s not too late. There is a project that makes minimal changes to the oslo.db driver to allow replacing the SQLAlchemy and MySQL database that underpins Nova with a distributed database (they used Redis, but it doesn’t depend on Redis). It should really be investigated further before we create a huge pile of technical and design debt by fragmenting the data in Nova.

OpenStack Ideas

I’ve written several blog posts about my ideas for improving OpenStack, with a particular emphasis on the Nova Scheduler. This week at the OpenStack Summit in Austin, there were two other proposals put forth. So at least I’m not the only one thinking about this stuff!

At the Tuesday keynote, Intel demonstrated a version of OpenStack that was completely re-written in Go. They demonstrated creating 10,000 containers and 5,000 VMs in under a minute. Pretty impressive, right? Well, yeah, except they gave no idea of what parts of Nova were supported, and what was left out. How were all those VMs scheduled? What sort of logging was done to help operators diagnose their sites? None of this was shown or even discussed. It didn’t seem to be a serious proposal for moving OpenStack forward; instead, it seemed that it was a demo with a lot of sizzle designed to simply wake up a dormant community, and make people think that Intel has the keys to our future. But for me, the question was always the same one I deal with when I’m thinking about these matters: how do you get from the current OpenStack to what they were showing? Something tells me that rather than being a path forward, this represents a brand-new project, with no way for existing deployments to migrate without starting all over. So yeah, kudos on the demo, but I didn’t see anything directly useful in it. Of course Go would be faster for concurrent tasks; that’s what the language was designed for!

The other project was presented by a team of researchers from Inria in France who are aiming to build a massively-distributed cloud with OpenStack. Instead of starting from scratch as Intel did, they instead created a driver for oslo.db that mimicked SQLAlchemy, and used Redis as the datastore. It’s ironic, since the first iteration of Nova used Redis, and it was felt back then that Redis wasn’t up to the task, so it was replaced by MySQL. (Side note: some of my first commits were for removing Redis from Nova!) And being researchers, they meticulously measured the performance, and when sites were distributed, over 80% of the queries performed better than with MySQL. This is an interesting project that I intend on following in the future, as it actually has a chance of ever becoming part of OpenStack, unlike the Intel project.

I still hold out hope that one day we can free ourselves of the constraints of having to fit all resources that OpenStack will ever have to deal with into a static SQL model, but until then, I’m happy with whatever incremental improvements we can make. It was obvious from this Summit that there are a lot of very smart people thinking about these issues, too, and that fills me with hope for the long-term health of OpenStack.