API Stability Thoughts

Recently in the OpenStack API Working Group we have been spending a lot of time and energy on establishing the API Stability guidelines that will serve as the basis for the supports-api-stability tag proposed by the OpenStack Technical Committee. Tags are a way for consumers of OpenStack to get a better idea of the state of the various projects, and this particular tag is intended to reassure consumers that a project’s API will not change in a breaking way. The problem is defining what exactly constitutes a “breaking change”.

While there are about as many opinions as there are participants in the discussion, they all roughly fall into one of two camps:

  1. A change that simply adds to the existing API, such as returning new values alongside the current ones, isn’t breaking stability, as existing clients will still receive all the information they expect, and will ignore the additional stuff. (There’s a small sketch of this reasoning just after this list.)
  2. Any cloud that says it is running a particular version of an API should return the exact same information. In other words, a client written for Cloud A will work without modification with Cloud B. If something changes that would make these responses different, that change must be reflected in a new version, and the old version should remain available for a “long time” (precisely how long a “long time” lasts is a completely separate discussion in itself!).
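To make Camp #1’s reasoning concrete, here is a minimal sketch in Python; the response payloads and the “locked” field are invented for the example:

    # Hypothetical responses from the same API call, before and after a
    # purely additive change.
    old_response = {"id": "42", "status": "ACTIVE"}
    new_response = {"id": "42", "status": "ACTIVE", "locked": False}

    def client_view(resp):
        # A client written against the old API reads only the keys it
        # knows about, so any extra keys are silently ignored.
        return (resp["id"], resp["status"])

    # The client sees exactly the same thing before and after the change.
    assert client_view(old_response) == client_view(new_response)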

I wrote about the second point above in an earlier post, which attempted to summarize that position after some discussion with many in the community who were pushing cloud interoperability (or “interop”). And at the recent Atlanta PTG (which I recapped here), we discussed this issue at length. The problem was that those who fell into Camp #1 above were at the morning session, while Camp #2 was there in the afternoon. So while the discussions were fruitful, they were not decisive. The discussions and comments on the Gerrit review for the proposed change to the API Stability Guidelines since the PTG reflect this division of opinion and lack of resolution.

But today during discussions in the API-WG meeting on IRC, it dawned on me that there is a fundamental reason we can’t reconcile these two points of view: we’re talking about two different goals. Camp #1 is concerned with not breaking clients whose applications rely on an OpenStack service’s API, while Camp #2 is concerned with not having different cloud deployments vary from each other.

The latter goal, while admirable, is very difficult to achieve in practice for anything but the most basic stuff. For one thing, any service that uses extensions will almost certainly fail, because there is no way to guarantee that deployments will always install and run the same extensions – that’s sort of the point of extensibility, after all. And during the discussions at the PTG, we tried to identify versioning systems that could meet the interop requirements, and the only one anyone could describe was microversions. So that means to satisfy Camp #2, a service would have to use microversions, period.
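For anyone who hasn’t worked with microversions: the client opts into an exact API version on every request, and the server either honors that version or refuses the request. Here’s a rough sketch of what that looks like against Nova’s compute API; the endpoint, token, and version number are all placeholders:

    import requests

    COMPUTE = "https://cloud.example.com/compute/v2.1"  # hypothetical endpoint

    resp = requests.get(
        COMPUTE + "/servers",
        headers={
            "X-Auth-Token": "...",  # placeholder; real tokens come from Keystone
            # Pin the request to one exact microversion. A server that
            # doesn't support this version refuses the request (HTTP 406)
            # instead of silently answering with different behavior.
            "OpenStack-API-Version": "compute 2.42",
        },
    )

Because the contract is per-request and exact, two clouds that both accept “compute 2.42” are promising the same behavior at that version, which is precisely what the interop camp wants.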

So I propose a slightly different route forward: let’s define two tags to reflect these two different types of “stability”. Let’s use the original tag “assert:supports-api-compatibility” to mean the Camp #2 standard, as its emphasis is interoperability. Then add a separate “assert:supports-api-stability”, which reflects the Camp #1 understanding of never breaking clients.

It is important to note that this second tag is not meant to indicate a “light” version of the first just because its requirements aren’t as difficult to attain. It reflects support for a different, but still important, kind of continuity for a project’s users. Each project can decide which of these goals is relevant to it, and will make its API better by achieving either (or both!) of them.

Atlanta PTG Reflections

Last week was the first-ever OpenStack PTG (Project Teams Gathering), held in Atlanta, Georgia. Let’s start with the obvious: the name is terrible, which made it very hard to explain to people (read: management at your job) what it was supposed to be, and why it was important. “The Summit” and “The Midcycle” were both much better in that regard. Yes, there was plenty of material available on the website, but a catchier name would have helped.

But with that said, it was probably one of the most productive weeks I’ve had as an OpenStack developer. In previous gatherings there was always something in the way. The Summits were too “noisy”, with all the distractions of keynotes, the marketplace, presentations, and business/marketing people all over the place. The midcycles were much more focused on developer issues, but since they were usually single-team events, there was very little cross-project interaction. The PTG represented the best of both without their downsides.

The first two days were devoted to cross-project matters, and the API Working Group certainly fits that description, as our goal is to help all OpenStack projects develop clean, consistent APIs. So as a core member of the API-WG, I was prepared to spend most of my time in these discussions. However, on Monday morning our room was fairly empty, probably because we hadn’t been assigned a room until the night before, so not many people knew about it. We all pecked at our laptops for an hour or so, and then I figured we’d just start. The topic was the proposed changes to the API stability guidelines that would define what the assert:supports-api-compatibility tag requires of a project. I outlined the basic points, and Chris Dent filled in some more details. I was afraid that it might end up being Chris and me doing most of the talking, but people started adding their own points of view on the matter. Before long the room became more crowded; I think the lively discussion attracted people (well, that and the sign that Chris added in the hallway!).

The gist of the discussion was just how strict we needed to be about when changing some aspect of a public API requires a version change. Most of the people in the room that morning were of the opinion that while removing an API or changing the behavior of a call would certainly require one, non-destructive changes like adding a new API call, or adding an additional field to a response, should be fine without a version change, since they shouldn’t break anything. I tried to make the argument for interop API stability, but I was outnumbered 🙂 Fortunately, I ran into the biggest (and loudest! 🙂) proponent of that position, Monty Taylor, at lunch, and convinced him to come to the afternoon session and make his point of view heard clearly. And he did exactly that! By the end of the afternoon, we were all in agreement that any change to any API call requires a version increase, and so we will update the guidelines to reflect that.

Tuesday was another cross-project day, with discussions on hierarchical quotas taking up a lot of the morning, followed by a Nova-Neutron session and another session with the Cinder folks on multi-attach. What was consistent across these sessions was a genuine desire to get things working better, without any of the finger-pointing that could certainly arise when two teams get together to figure out why things aren’t as smooth as they should be.

Wednesday began the team-specific sessions. Nova was given a huge, cavernous ballroom. It had a really bad echo, as well as constant fan noise from the air system, and so for someone like me with hearing loss, it was nearly impossible to hear anything. Wish I had worked on my lip reading!

The cavernous ballroom as originally set up for the Nova team sessions.

We quickly decided to re-arrange the tables into a much more compact structure, which made it slightly better for discussions.

Moving the tables into a smaller rectangle made it a little easier to hear each other.

We had a full agenda, with topics such as cells V2, quotas, and the placement engine/API pretty much taking up Wednesday and Thursday. And like the cross-project days, it felt like we made solid progress. Anyone who had doubts about this new format was convinced by now that the PTG was a big improvement! The discussions about Placement were especially helpful for me, because we went into the details of the complex nesting possibilities of NUMA cells and SR-IOV devices, and what the best way (if any) to effectively model them would be.

There was one dark spot on the event: my laptop died a horrible death! Thursday morning I opened the lid that I had closed a few hours earlier after an evening of email answering and Netflix watching, only to be greeted with this:

You do NOT want your laptop screen to look like this!

It had made a crackling sound as the screen displayed kernel panic output, so I unplugged the charger and closed the lid. After waiting several anxious minutes, I tried to turn the laptop on. Nothing. Dead. No response at all: no sound, no video… nothing. I tried again and again, using every magical keypress incantation I knew, and nothing. Time of death: 0730.

Sure, I still had my iPhone, but it’s really hard to do serious work that way. For one, etherpads simply don’t work in iOS browsers. It’s also very hard to see much of a conversation in an IRC client on such a small screen. All I could do was read email. So I spent the rest of the PTG feeling sorry for myself and my poor dead laptop. David Medberry lent me his keyboard-equipped Kindle for a while, and that was a bit better, but still, when you have a muscle-memory workflow, nothing will replace that.

The Foundation also arranged to have team photos taken during the PTG. You can see all the teams here, but I thought I’d include the Nova team photo here:

The Nova Team at the Pike PTG

Right after the last session on Thursday was a feedback session for the OpenStack Foundation to get the attendees’ impressions of what went well, what was terrible, what they should keep doing, what they should never, ever do again, and everything in between. In general, most people liked the PTG format and felt that it was a very productive week. There were many complaints about the hotel setup (room size, noisy AC, etc.), as well as disappointment in the variety of meals and lack of snacks, but lots of praise for the continuous coffee!!

Thursday night was the Nova team dinner. We went to Ted’s Montana Grill, where we were greeted by a somewhat threatening slogan:

Hmmm… are you threatening me???

The staff wasn’t threatening at all, and quickly found tables for all of us. On the way through the restaurant we passed several other tables of Stackers, so I guess that this was a popular choice. We had a wonderful dinner, and on the walk home, Chet Burgess, whose parents still live in the Atlanta area, suggested we stop at the Westin hotel for a quick drink. That sounded great to me, so four of us went into the hotel. I was surprised that Chet walked right past the bar, and went to the elevators. Turns out that there is a rotating bar up on the 73rd floor! Here is the group of us going up the elevator:

Top: John Garbutt, Tony Breeds. Bottom: Chet Burgess and Yours Truly

It was dark in the bar area, so I couldn’t get a nice photo, but here’s a stock photo to give you an idea of what the bar looked like:

The Sundial Bar at the Westin Hotel

Big thanks to Chet for organizing the dinner and suggesting having drinks up in the heights of Atlanta!

Friday was a much lower-key day. Gone were the gigantic ballrooms; we moved down to the lower level of the hotel for the final day. Many people had already left, as many teams did not schedule three full days of sessions. The Nova team used the first part of the day for the Ocata retrospective: what went well, what didn’t go so well, and how we can improve as we start working on Pike. The main points were that while communication among the developers was better, it still needed to improve. We also agreed on the need for more visual documentation of the logic flows within the code. The specs only describe the surface of the design, and many people (like myself) are visual learners, so we’ll try to get something like that done for the Placement logic so that everyone can better understand where we are and where we need to go.

I had to leave around 4pm on Friday to catch my flight home, so I headed to the ATL airport. While walking through the terminal I saw a group of men standing in one of the hallways, and recognized that one of them was Rep. John Lewis, one of the leaders of the Civil Rights movement along with Dr. Martin Luther King, Jr., whose birthplace and historic site I visited earlier in the week. I shook his hand, and thanked him for everything that he has done for this country. Immediately afterwards I texted my wife to tell her about it, and she chastised me for not getting a photo! I explained that I was too nervous to impose on him.

A little while later I walked over to another part of the airport where I knew there was a restroom, since I had to empty my water bottle before going through security. When I got there, I saw some of the same group of men I had seen with Rep. Lewis earlier, but he was no longer among them. Then I looked over by the entrance to the men’s room, and I saw Rep. Lewis posing for a selfie with the janitor! I figured he wouldn’t mind taking one with me, so when he came out I apologized for bothering him again, and asked if he would mind a photo. He smiled and said it was no problem, so…

Ran into one of the great American heroes, Representative John Lewis, in the Atlanta airport. He was gracious enough to let me take this photo.

I admit that I was too excited to hold the phone very still! So a blurry photo is still better than no photo at all, right? I’ve met several famous people in my lifetime, but never one who has done as much to make the world a better place. Looking back, it was a fitting end to a week that involved people of different nationalities, races, and religions coming together to build free and open software.

Interop API Requirements

Lately the OpenStack Board of Directors and Technical Committee have placed a lot of emphasis on making OpenStack clouds from various providers “interoperable”. This is a very positive development after years of different deployments adding various extensions and modifications to the upstream OpenStack code, which had made it hard to define just what it means to offer an “OpenStack Cloud”. So the Interop project (formerly known as DefCore) has been working for the past few years to create a series of objective tests that cloud deployers can run to verify that their cloud meets these interoperability standards.

As a member of the OpenStack API Working Group, though, I’ve had to think a lot about what interop means for an API. I’ll sum up my thoughts, and then try to explain why.

API Interoperability requires that all identical API calls return identical results when made to the same API version on all OpenStack clouds.

This may seem obvious enough, but it has implications that go beyond our current API guidelines. For example, we currently don’t recommend a version increase for changes that add things, such as an additional header or a new URL. After all, no one using the current version will be hurt by this, since they aren’t expecting those new things, and so their code cannot break. But this only considers the effect on a single cloud; when we factor in interoperability, things look very different.

Let’s consider the case where we have two OpenStack-based clouds, both running version 42 of an API. Cloud A is running the released version of the code, while Cloud B is tracking upstream master, which has recently added a new URL (something we’ve said in the past is OK). If we call that new URL on Cloud A, it will return a 404, since that URL is not defined in the released version of the code. On Cloud B, however, since the URL is defined in the current code, the call will return anything but a 404. So we have two clouds claiming to be running the same version of OpenStack, yet identical calls to them have very different results.
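Here is that scenario sketched out; the cloud URLs and the path are invented, and “v42” stands in for whatever version both clouds advertise:

    import requests

    # Both clouds claim to serve "version 42" of the same API, but only
    # Cloud B's code defines the newly added URL.
    for cloud in ("https://cloud-a.example.com", "https://cloud-b.example.com"):
        resp = requests.get(cloud + "/v42/new-things")
        print(cloud, resp.status_code)

    # Cloud A prints 404; Cloud B prints a success code. Two clouds, one
    # advertised version, different answers to an identical call.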

Note that when I say “identical” results, I mean structural things, such as response code, format of any body content, and response headers. I don’t mean that it will list the same resources, since it is expected that you can create different resources at will.
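One rough way to picture that structural notion of sameness (purely illustrative, not anything from the guidelines):

    def structure(resp):
        # The parts of a response that must match across clouds: the
        # status code, the content type, and the shape of the body.
        return (
            resp.status_code,
            resp.headers.get("Content-Type"),
            sorted(resp.json().keys()),  # same field names, not same values
        )

    # Interop holds when structure(resp_a) == structure(resp_b) for
    # identical calls at the same API version, even though the resources
    # listed will of course differ from cloud to cloud.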

I’m sure this will be discussed further at next week’s PTG.


API Longevity

How long should an API, once released, be honored? This is a topic that comes up again and again in the OpenStack world, and there are strong opinions on both sides. On one hand are the absolutists, who insist that once a public API is released, it must be supported forever; there is never any justification for either changing or dropping it. On the other hand are the pragmatists, who think that APIs, like all software, should evolve over time, since the original code may be buggy, or the needs of its users may have changed.

I’m not at either extreme. I think the best analogy is that I believe an API is like getting married: you put a lot of thought into it before you take the plunge. You promise to stick with it forever, even when it might be easier to give up and change things. When there are rough spots (and there will be), you work to smooth them out rather than bailing out.

But there comes a time when you have to face the reality that staying in the marriage isn’t really helping anyone, and that divorce is the only sane option. You don’t make that decision lightly. You understand that there will be some pain involved. But you also understand that a little short-term pain is necessary for long-term happiness.

And like a divorce, an API change requires extensive notification and documentation, so that everyone understands the change that is happening. Consumers of an API should never be taken by surprise, and should have as much advance notice as possible. When done with this in mind, an API divorce does not need to be a completely unpleasant experience for anyone.


OpenStack Focus

There has been a bit of concern expressed lately that OpenStack is somehow losing its focus, and is in danger of losing its momentum because of the effect of the Big Tent and its resulting growth in the number of projects that call themselves OpenStack. This growth has even been blamed for the recent layoffs of OpenStack developers. Some have called on the OpenStack leadership to dismantle the Big Tent approach and focus on only a few core projects.

While it is true that the plethora of projects has diverted attention from the work needed in the heart of OpenStack (and I won’t go into how to draw the line separating the two here), I feel that the criticism is misplaced. It isn’t up to the governing bodies of OpenStack to enforce such a refocusing; rather, it is up to the contributors to make such decisions. That’s just the way that open source development works. It is silly to think that companies like HPE would take their marching orders from the OpenStack Foundation Board, or the OpenStack TC. The idea of the Big Tent, that all projects that are “one of us” shall have access to the same resources, is fine as it is.

The mistake that I believe many companies made is that they tried to focus on beefing up numbers that are irrelevant, such as lines of code, or the number of cores or PTLs they employed, as a way of demonstrating their commitment to OpenStack. They then would use those numbers for their sales teams as a selling point for their OpenStack-based offerings.

Open source is a difficult sell for most companies; they certainly understand the benefit when they use it, but have a much harder time justifying the cost of paying their employees to work on something that is used by everyone, even their competitors. So they came up with ways of selling their particular spin on OpenStack, and used these contribution numbers to impress customers. When that failed to generate the type of revenue that was expected, out came the axe.

I believe that many of these companies encouraged the development of these small peripheral projects because it would be easier for one of their employees to achieve core status, and possibly get elected PTL, which their marketing departments would use in an attempt to prove that company’s OpenStack-ness.

I don’t agree that there is anything that OpenStack itself needs to do. Rather, the companies who are contributing to OpenStack need to better understand the nature of open source development, and focus on those areas that will make OpenStack as a whole richer and more reliable, instead of gaming the system to make themselves look important. So please stop saying that this is the fault of the Big Tent.