Distributed Data and Nova

Last year I wrote about the issues I saw with the design of the Nova Scheduler, and put forth a few proposals that I felt would address those issues. I’m not going to rehash them in depth here, but summarize instead:

  • The choice of having the state of compute nodes copied back to the scheduler over RPC was the source of the raciness observed when more than one scheduler was running. It would be better to have a database be the single source of truth.
  • The scheduler was created specifically for selecting hosts based on basic characteristics of VMs: RAM, disk, and VCPU. The growth of virtualization, though, has meant that we now need to select based on myriad other qualities of a host, and those don’t fit into the original ‘flavor’-based design. We could address that by creating Resource classes that encapsulated the knowledge of a resource’s characteristics, and which also “knew” how to both write the state of that resource to the database and generate the query for selecting that resource from the database (a rough sketch of what I mean follows this list).
  • Nova spends an awful lot of effort trying to move state around, and to be honest, it doesn’t do it all that well. Instead of trying to re-invent a distributed data store, it should use something that is designed to do it, and which does it better than anything we could come up with.
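
Here’s a rough sketch of the Resource class idea from the second point. To be clear, this is my own illustration, not what Nova actually implements; the class name, attributes, and methods are all invented:

# A purely hypothetical sketch of a self-describing Resource class.
class RAMResource:
    # Each resource class knows its own characteristics...
    def __init__(self, host, total_mb, used_mb):
        self.host = host
        self.total_mb = total_mb
        self.used_mb = used_mb

    # ...how to write its current state to the data store...
    def to_record(self):
        return {"host": self.host,
                "resource": "ram",
                "total": self.total_mb,
                "used": self.used_mb}

    # ...and how to generate the query that selects hosts able to
    # satisfy a given request for that resource.
    @staticmethod
    def selection_query(requested_mb):
        return ("SELECT host FROM resources "
                "WHERE resource = 'ram' AND total - used >= %s",
                (requested_mb,))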

But I’m pleased to report that some progress has been made, although not exactly in the manner that I believe will solve the issues long-term. True, there are now Resource classes that encapsulate the differences between different resources, but because the solution assumed that an SQL database was the only option, the classes reflect an inflexible structure that SQL demands. The process of squeezing all these different types of things into a rigid structure was brilliantly done, though, so it will most likely do just what is needed. But there is a glaring hole: the lack of a distributed data system. Until that issue is addressed, Nova developers will spend an inordinate amount of time trying to create one, and working around the limitations of an incomplete solution to this problem. Reading Chris Dent’s blog post on generic resource pools made this problem glaringly apparent to me: instead of a single, distributed data store, we are now making several separate databases: one in the API layer for data that applies across the cells, and a separate cell database for data that is just in that cell. And given that design choice, Chris is thinking about having a scheduler whose design mirrors that choice. This is simply adding complexity to deal with the complexity that has been added at another layer. Tracking the state of the cloud will now require knowing what bit of data is in which database, and I can guarantee you that as we move forward, this separation will be constantly changing as we run into situations where the piece of data we need is in the wrong place.

When I wrote last year, in the blog posts and subsequent mailing list discussions, I think the fatal mistake that I made was offering a solution instead of just outlining the problem. If I had limited it to “we need a distributed data store”, instead of “we need a distributed data store like Apache Cassandra”, I think much of the negative reaction could have been avoided. There are several such products out there, and who knows? Maybe one of them would be a much better solution than Cassandra. I only knew that I had gotten a proof-of-concept working with Cassandra, so I wanted to let everyone know that it was indeed possible. I was hoping that others would then present their preferred solution, and we could run a series of tests to evaluate them. And while several people did start discussing their ideas, the majority of the community heard ‘Cassandra’, which made them think ‘Java’, which soured the entire proposal in their minds.

So forget about Cassandra. It’s not the important thing. But please consider some distributed database for Nova instead of the current design. What does that design buy us, anyway? Failure isolation? So that if a cell goes down or is cut off from the internet, the rest can still continue? That’s exactly what distributed databases are designed to handle. Scalability? I doubt you could get much more scalable than Cassandra, which is used to run, among other things, Netflix and the Apple App Store. I’m sure that other distributed DBs scale as well or better than MySQL. And with a distributed DB, you can then drop the notion of a separate API database and separate cell databases that all have to coordinate with each other to get the information they need, and you can avoid the endless discussions about, say, whether the RequestSpec (the data representing a request to build a VM) belongs in the API layer (since it was received there) or in the cell DB (since that’s where the instance associated with it lives). The data is in the database. Write to it. Query it. Stop making things more complicated than they need to be.
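
To make the “write to it, query it” point concrete, here’s a rough sketch using the Python cassandra-driver. The keyspace, table, and columns are made up for illustration; this is not an actual Nova schema:

import uuid
from cassandra.cluster import Cluster

# Connect to any nodes of the single distributed store; there is no
# separate "API database" or "cell database" to choose between.
cluster = Cluster(["10.0.0.11", "10.0.0.12", "10.0.0.13"])
session = cluster.connect("nova")   # hypothetical keyspace

request_id = uuid.uuid4()

# Write the RequestSpec once, wherever the request happens to be handled...
session.execute(
    "INSERT INTO request_specs (request_id, project_id, flavor) "
    "VALUES (%s, %s, %s)",
    (request_id, "demo-project", "m1.small"))

# ...and read it back from anywhere, API layer or cell, with the same call.
spec = session.execute(
    "SELECT * FROM request_specs WHERE request_id = %s",
    (request_id,)).one()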

Bias is Bias, Inadvertent or Not

I recently read this tweet storm by Matt Joseph (@_mattjoseph) that made me think. Go ahead, read it first. Read all 30 of his tweets so that you understand his point.

Whether you like to admit it or not, bias is real, and the targets of negative bias end up having to work much, much harder to overcome that bias than those for whom the bias is positive. Want an example? In the classical music world, musicians would audition to fill openings in an orchestra. For such auditions, the musical director, and possibly one or two other senior musicians, would act as judges. They would listen to each candidate perform a piece of music so that their musical abilities could be rated, and the highest-rated musicians would get the job. Pretty straightforward. Traditionally (that is, through 1970), women made up only 5% or so of most orchestras. Now, it can be assumed that a musical director would want the best musicians in their orchestra, so they would have no reason to select mostly men if women played just as well. So the common assumption was that playing music was both artistic and athletic, and that it was this athletic component that gave men the edge.

However, starting in the 1970s, auditions were switched to be done blindly: the musicians performed behind a screen, and the judges only had a number to refer to them.

[Image: a blind audition, with the candidate performing behind a screen. Credit: old.post-gazette.com]

It should not shock you that with this change, the percentage of women in orchestras began climbing, reaching 20% by the 1990s. Given the low turnover of orchestras, this is a huge difference! There are only two possible explanations for such a rapid, radical change. One is that women suddenly got better at playing music, though there is no evidence of any additional intense training programs for female musicians at that time.

So the second, and obvious, explanation is that prior to the blind auditions, the bias of the judges influenced what they heard, and as a result, women would be scored lower. Put another way, for a woman to make it into an orchestra, she had to be much more talented than a man in order to overcome that bias and get a similar score.

That, in essence, is the point Matt was making about the state of funding for tech companies: people of color, like him,

“…had to overcome things that others in the exact same position didn’t have to. That means with equal conditions, we’d be much further.”

The flip side to this is that, given two people of equal talent, you can expect that the person subjected to these kinds of negative biases will have less to show, in terms of any measures that may be used as “objective” criteria. This includes things like grades and SAT scores for kids applying to colleges. The attempt to correct for this bias is commonly referred to as “Affirmative Action”. If you recognize that bias exists, you understand why programs like this are important. Of course, it would be better to eliminate bias altogether, right? Yeah, and be sure to tell me when someone figures out how to do that. I don’t believe it’s possible, given the tribal nature in which humans evolved. This is why devices such as the blind audition are needed, and, if that’s not possible, applying a corrective factor to compensate.

Still not convinced that steps like Affirmative Action are correct? Then please explain why minorities such as blacks and Latinos score lower on average than whites on measures like the SAT. I see only two explanations: 1) they face many more hurdles in the education system, such as poorer facilities and support systems, that prevent them from progressing as far, or 2) they are inherently not as smart as whites. I’m sure that if you thought that option 2 was even possible, you wouldn’t be the type of person inclined to read this far. The proof is in the stats: if a group makes up N% of the population overall, but less than N% of some selected group, you’d better be able to identify an objective reason for this difference, or you’ve got to assume bias is influencing these numbers. And it isn’t something to be ashamed of or try to deny: we all have biases that we aren’t aware of, so it simply makes sense to admit that this is the case, and try to find a way to address it to make things level.

And don’t for a moment think that this is an altruistic, touchy-feely thing to help assuage white guilt. It means that talented people who were previously overlooked will now have a better chance of contributing, making things better for all. Why wouldn’t you want the best people working for you?

Moving Forward (carefully)

It’s a classic problem in software development: how to change a system to make it better without breaking existing deployments. That’s the battle that comes up regularly in the OpenStack ecosystem, and there aren’t any simple answers.

On the one hand, you’ve released software that has a defined interface: if you call a particular API method with certain values, you expect a particular result. If one day making that exact same call has a different result, users will be angry, and rightfully so.

On the other hand, nobody ever releases perfect software. Maybe the call described above works, but does so in a very unintuitive way, and confuses a lot of new users, causing them a great deal of frustration. Or maybe a very similar call gives a wildly different result, surprising users who didn’t expect it. We could just leave them as is, but that isn’t a great option. The idea of iterative software is to constantly make things better with each release.

Enter microversions: a controlled, opt-in approach to revising the API. If this is a new concept, read Sean Dague’s excellent summary of microversions. The concept is simple enough: the API won’t ever change, unless you explicitly ask it to. Let’s take the example of an inconsistent API call that we want to make consistent with other similar calls: we make the change, bump the microversion (let’s call this microversion number 36, just for example), and we’re done! Existing code that relies on the old behavior continues to work, but anyone who wants to take advantage of the improved API just has to specify that they want to use microversion 36 or later in their request header, and they get the new behavior. Done! What could be simpler?
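
To make that concrete, here’s roughly what opting in looks like from client code, using the Python requests library. The endpoint, token, and the ‘2.36’ value (our hypothetical microversion 36) are placeholders, not real values:

import requests

# Ask for the behavior introduced in the (hypothetical) microversion 36;
# without this header, you get the original, default behavior.
headers = {
    "X-Auth-Token": "YOUR_TOKEN_HERE",           # placeholder
    "X-OpenStack-Nova-API-Version": "2.36",      # opt in to microversion 36
}
resp = requests.get("http://nova-api.example.com:8774/v2.1/servers",
                    headers=headers)
print(resp.status_code)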

Well, there are potential problems. Let’s continue with the example above, and assume that later on some really cool new feature is added to the API. Let’s assume that this is added in microversion 42. A user who might want to use this new feature sets their headers to request microversion 42, but now they may have a problem if other code still expects the inconsistent call that existed in pre-36 versions of the API. In other words, moving to a new microversion to get one specific change requires that you also accept all of the changes that were added before that one!

In my opinion, that is a very small price to pay. Each microversion change has to be documented with a release note explaining the change, so before you jump into microversion 42, you have ample opportunity to learn what has changed in microversions 2-41, too. We really can’t spend too much mental effort on protecting the people who can’t be bothered to read the release notes, as the developers and reviewers have gone to great lengths to make sure that these changes are completely visible to anyone who cares to make the effort. We can’t assume that the way we did something years ago is going to work optimally forever; we need to be able to evolve the API as computing in general evolves, too. Static is just another word for ‘dead’ in this business. So let’s continue to provide a sane, controlled path forward for our users, and yes, it will take a little effort on their part, too. That’s perfectly OK.

Behavior Modification

Over the weekend I saw a retweet from my friend Niki Acosta (@nikiacosta) which stated:

Destroy the idea that men should respect women because we are their daughters, mothers, and sisters. Reinforce the idea that men should respect women because we are people.

While I certainly agree with the latter notion, I don’t think that the former is very wise. We have a problem with men who treat women as nothing more than objects, and that translates into all kinds of hostile and dangerous behavior. First and foremost, we should reduce the amount of that behavior, and therefore the number of its victims. So what is really needed is a way to modify their behavior; after that’s done we can think about enlightening their backwards minds, but until then, that’s a far-off luxury.

Men who exhibit these behaviors in general do not see women as people, so trying to appeal to them on this will have no effect. These men are brought up in environments where women are not seen as equal. Most come from the world of “traditional” marriage, where a woman was property to be exchanged among men in different families. They exist for men’s sexual pleasure, to bear offspring, and to do the “women’s work” of the home. In that world, women are servants. The notion that a woman is just as much a person as they are, and deserves equal respect, would seem ludicrous to them. But it is likely that they have developed some bonds with female members of their family, and so they can understand that if someone were to disrespect their mother, or their sister, they would feel that that action was wrong, and it’s possible that they might make the relatively small mental leap to seeing that the “objects” they want are indeed someone else’s mother or sister or daughter. It might cause them to think twice about acting on their thoughts.

As the saying goes, Perfect is the Enemy of the Good. It would be absolutely wonderful if we could raise the social awareness of everyone so that people treat each other well simply because of our respective personhoods, but if you strive for that, you’ll miss opportunities to make some incremental changes in the world. Let’s focus on improving the behavior of these problematic men before we worry about raising their level of consciousness.

Creating a Small-Scale Cassandra Cluster

My last post started a discussion about various possible ways to improve the Nova Scheduler, so I thought that I’d start putting together a proof-of-concept for the solution I proposed, namely, using Cassandra as the data store and replication mechanism. But there’s a problem: not everyone has the experience to set up Cassandra, so I thought I’d show you what I did. I’m using 3 small cloud instances on Digital Ocean, but you could set this up with some local VMs, too.

We’ll create three 512MB droplets (that’s their term for VMs). The 512MB size is the smallest they offer (hey, this is a POC, not production!). I named mine ‘cass0’, ‘cass1’, and ‘cass2’. Choose a region near you, and in the “Select Image” section, click on the “Applications” tab. In the lower right side of the various options, you should see one for Docker (as of this writing, it’s “Docker 1.8.3 on 14.04”). Select that, and then below that select the “Private Networking” option; this will allow your Cassandra nodes to communicate more efficiently with each other. Add your SSH key, and go! In about a minute the instances should be ready, so click on their name to get to the instance information page. Click the word ‘Settings’ along the left side of the page, and you will see both the public and private IP addresses for that instance. Record those, as we’ll need them in a bit. I’ll refer to them as $IP_PRIVn for the instance cass(n); e.g., $IP_PRIV2 is the private IP address for cass2.
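
Since the commands below reference those addresses as shell variables, it’s handy to export them on each droplet. The values here are just placeholders for whatever addresses you were assigned:

# Substitute the private IPs shown on each droplet's Settings page.
export IP_PRIV0=10.132.0.10
export IP_PRIV1=10.132.0.11
export IP_PRIV2=10.132.0.12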

If you are using something other than Digital Ocean, such as Virtual Box or Rackspace or anything else, and you don’t have access to an image with Docker pre-installed, you’ll have to install it using either sudo apt-get install docker-engine or sudo yum install docker-engine.

Once the droplets are running, ssh into them (I use cssh to make this easier), and run the usual apt-get updates to pull all the security fixes. Reboot. Reconnect to each droplet, and then grab the latest Cassandra image for Docker by running: docker pull cassandra:latest. [EDIT – I realized that without using volumes, restarting the node would lose all the data. So here are the corrected steps.] Then you’ll create directories to use for Cassandra’s data and logs:

mkdir data
mkdir log

To set up your Cassandra cluster, first ssh into the cass0 instance. Then run the following to create your container:

docker run --name node0 \
    -v "$PWD/data":/var/lib/cassandra \
    -v "$PWD/log":/var/log/cassandra \
    -e CASSANDRA_BROADCAST_ADDRESS=$IP_PRIV0 \
    -p 9042:9042 -p 7000:7000 \
    -d cassandra:latest

If you’re not familiar with Docker, what this does is create a container named ‘node0’ from the image cassandra:latest. It creates two volume mounts (the arguments beginning with -v): the first maps the local ‘data’ directory to the container’s ‘/var/lib/cassandra’ directory (where Cassandra stores its data), and the second maps the local ‘log’ directory to where Cassandra would normally write its logs. It passes the private IP address in the environment variable CASSANDRA_BROADCAST_ADDRESS; in Cassandra, the broadcast address is the address that the node advertises to the rest of the cluster. It also opens two ports: 9042 (the CQL query port) and 7000 (for intra-cluster communication). Now run docker ps -a to verify that the container is up and running.

For the other two nodes, you do something similar, but you also specify the CASSANDRA_SEEDS parameter to tell them how to join the cluster; this is the private IP address of the first node you just created. On cass1, run:

docker run --name node1 \
    -v "$PWD/data":/var/lib/cassandra \
    -v "$PWD/log":/var/log/cassandra \
    -e CASSANDRA_BROADCAST_ADDRESS=$IP_PRIV1 \
    -e CASSANDRA_SEEDS=$IP_PRIV0 \
    -p 9042:9042 -p 7000:7000 \
    -d cassandra:latest

Then on cass2 run:

docker run --name node2 \
    -v "$PWD/data":/var/lib/cassandra \
    -v "$PWD/log":/var/log/cassandra \
    -e CASSANDRA_BROADCAST_ADDRESS=$IP_PRIV2 \
    -e CASSANDRA_SEEDS=$IP_PRIV0 \
    -p 9042:9042 -p 7000:7000 \
    -d cassandra:latest
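
Before moving on, you can verify that all three nodes have actually joined the ring. The official Cassandra image includes nodetool, so on cass0 (adjust the container name on the other droplets) run:

# Each node should be listed with a status of "UN" (up and normal).
docker exec -it node0 nodetool status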

That’s it! You have a working 3-node Cassandra cluster. Now you can start playing around with it for your tests. I’m using the Python library for Cassandra to connect to my cluster, which you can install with pip install cassandra-driver. But working with that is the subject for another post in the future!
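
I’ll save a real tour of the driver for that post, but as a quick sanity check, connecting and running a trivial query looks something like this (substitute your droplets’ public IP addresses for the placeholders):

from cassandra.cluster import Cluster

# Connect using the public IPs of the droplets (placeholders here).
cluster = Cluster(["203.0.113.10", "203.0.113.11", "203.0.113.12"])
session = cluster.connect()

# Ask the cluster which Cassandra version it is running.
row = session.execute("SELECT release_version FROM system.local").one()
print(row.release_version)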