Rigid Agility

The title of this post points out the absurdity of the approach to Agile software development in many organizations: they want to use a system designed to be flexible in order to quickly and easily adapt to change, but then impose this system in a completely inflexible way.

The pitfalls I discussed in my previous blog post are all valid. I’ve seen them happen many times, and they have had a negative impact on the teams involved. But they could all be fixed by bringing the problem to light and discussing it honestly. A good manager makes all the difference in these situations.

But the most common problem, and also the most severe, is that people simply do not understand that Agile is a philosophy, not a set of things that you do. I could go into detail, but it is expressed quite well in this blog post by Brian Knapp.

The key point in that post is “Agile is about contextual change”. When things are not right, you need to be able to change in response. Moreover, it is specifically about not having a set of rigid rules defining how you work. Unfortunately, too many managers treat Agile practices as if they were magical incantations: just say these words, and go through these motions, and voilà! Instant productivity! Instant happy developers! Instant happy clients!

Agile practices came about in response to previous ways of doing things that were seen as too rigid to be effective. The name “Agile” itself represents being able to change and adapt. So why do so many managers and companies fail to understand this?

In most cases, this misunderstanding is greatest when the adoption of these practices is mandated from the upper levels of management, rather than developing organically within the teams that use them. Typically, some VP reads an article about how Agile improved some other company’s productivity, and decides that everyone in their company will do Agile, too! I mean, that’s what leadership is all about, right? So the lower-level managers get the word that they have to do this Agile thing. They read up on it, or they go to a seminar given by some highly-paid consultants, and they think that they know what they have to do. Policies and practices are set up, and everyone has to follow them. Oh, wait, you have some groups in the company who don’t work on the same thing? Too bad, because the CxO level has decreed that “everyone must do these same things in the same way”.

Can you see how this practice misses the whole point of being Agile? (and why I started this series with a blog post about Punk Rock?) A team needs to figure out what works for them and what doesn’t, and change so that they are doing more of the good stuff and less (or none) of the bad. And it doesn’t matter if other teams are running things differently; you should do what you need to be successful. In an environment of trust, this happens naturally.

Unfortunately, when Agile is imposed from the top down, trust is rarely considered important, and certainly not treated as the most important aspect of success. And when teams start to follow Agile practices in this sort of environment, they may experience some improvement, but it certainly will not be anything like what they had envisioned. Teams will be called “failures” because they didn’t “do agile right”. Managers then respond by reading up some more, or by hiring “agile consultants”, in order to figure out what’s wrong. They may decide to change a thing or two, and while things may get slightly better, it still isn’t the nirvana that was promised, and it never will be.

Unfortunately, too many people who are reading this and nodding their heads in recognition are stuck in a rigid company that is afraid to trust its employees. All I can say to you is do what you can to make things better, even if things still fall short. And in the longer term, “contextual change” is probably a term you need to apply to your employment.

Agile Pitfalls

So your team is adopting Agile practices? Maybe even your whole company? Your managers and their managers are all talking about it, and the wonderful future it will bring, with gains in productivity and developer happiness. But while Agile can be exciting and productive when done correctly, too many people unfortunately do not understand what it means to be Agile. They’ve probably read a few articles about it, know words like Scrum and Kanban and Velocity, and now think they understand it. They set out to change their teams to be Agile, and the team begins to estimate user stories, do regular standups, and divide development tasks into 2-week sprints. They’re Agile now, right?

Not even close.

Agile, first and foremost, is about trust. Trust in the developers on the team to create quality software. Trust in the product managers to state the business’s needs accurately. Trust in management that the additional transparency required for agile development will not be used to criticize performance in the future. Trust that bringing a problem to the forefront will be appreciated instead of perceived as an attempt to assign blame. Trust that if events change the situation, the team will make the adjustments required. Trust that blame will never be the goal.

If any member of the team feels that their honesty and openness will be used against them in any way, they will respond by hiding things and disengaging from the team. Sure, they’ll still appear to be doing things the new way, but they won’t be forthcoming about problems they are encountering, and will instead paint a positive but unrealistic picture of their progress. Managers need to be aware of this tendency, and counter it by rewarding openness.

So while it is true that adopting some or all of the practices that fall under the term “agile” can improve your team’s software development, there are plenty of pitfalls to watch out for. Some of these pitfalls are the result of not understanding how agile practices are supposed to work, but most stem from mistrust: the fear that transparency will be used against team members at some point in the future, such as in performance reviews. I’ve listed several, in order from the least impactful to the most:

Note: I’m using the term “standup” to refer to the regular meeting of the team responsible for the work being done. Some places call it a “scrum”, but scrum is actually one particular set of agile practices, with a standup being one of them. I prefer to call it “standup” for two reasons: first, it emphasizes that everyone should be standing. That may sound silly, but it does wonders for keeping things brief and on-point. Second, if you’re familiar with the sport of rugby, a scrum is visually the opposite of how your team should look.

rugby scrum
Now *this* is a scrum!

Many of these pitfalls may seem trivial, but together they add up to reduce any gains you might expect to make by implementing Agile practices.

“Gaming” story point estimation

What I mean by “gaming” is inflating the number of points you estimate for your story so that it looks like you’re doing more (or more difficult) work than you actually are. This happens a lot in environments where team members feel that their performance evaluations will be based on their velocity. So when it comes time to estimate story points, these developers will consistently offer higher numbers, and argue for them by overstating the expected complexity. This behavior pre-dates Agile development; see this Dilbert cartoon from 1995 for a similar example. It’s human nature to want to look like you’ve accomplished a lot, especially when your potential raise and/or promotion depends on it. Again, managers can help reduce this by never using completed story points as an evaluation metric, and by not singling out individuals for praise for completing a “tough” story. It’s a team effort, and praise should go to the entire team.

Using standup for planning

This is common in the early stages of development, when people are still figuring things out. During standup, someone mentions an issue they are having, and another person chimes in with some advice. A discussion then ensues about various approaches to solve the issue, with the pros and cons of each being argued back and forth. Before you know it, 20 minutes has gone by, and you’re still on the first person’s report.

If something comes up during standup that requires discussion longer than a minute or so, it should be tabled until after standup. Write it down somewhere so it isn’t forgotten, and then move on. After standup, anyone interested in discussing that issue can do so, and the rest can go back to what they were doing.

Treating story point estimates as real things

Who hasn’t reviewed a requirement and thought “this isn’t so difficult”, only to find, once you try to make that change, that a lot of other things break? That’s just a reality when working with non-trivial systems. In an atmosphere of trust, the developer would share this news at the next standup (at the latest), so that everyone knows that the original estimate was wrong. But sometimes developers are made to feel that taking too long to finish a story estimated as fairly easy will be seen as a failure, and held against them. In such an environment, many developers will hide this information and struggle with the problems by themselves, instead of feeling safe enough to share the difficulty with their team.

Not having a consistent definition of “done” for Kanban

We all know what the word “done” means, right? Well, it’s not so clear when it comes to software. Is it done when the unit tests pass? When the functional tests pass? When it is merged into the master branch? When it is released into production?

Each team should define what they mean by “done”, and apply that consistently. Otherwise, you’ll have stories that still need attention marked as “done”, and that will make any measure of velocity meaningless.

Using standup to account for your time

This is one of the most common pitfalls for teams new to scrum. During standup, you typically describe what you worked on the previous day, what you plan to work on today, and (most importantly) anything that is preventing you from being successful. This serves two main purposes: it keeps people from duplicating effort, because everyone knows what others are working on, and it catches those inevitable problems before they grow into disasters.

But if your manager (or someone else in a higher-level role) is in the standup, many team members can feel pressure to list every single thing they worked on, every meeting they attended, every side task they helped out with… none of which has any bearing on the project at hand. This is especially true if they have been working on one large task while others on the team have completed several smaller ones; reporting just that one item can sound like they are goofing off, or not being as productive as the rest of the team. If you are the manager of a team and you notice team members doing this, it’s important to reassure them that you’re not tracking how they spend their time at standup. That would also be a good time to reinforce what standup should be about. Again, if that trust is not there among the team members, standup will become a waste of everyone’s time.

Rigid adherence to practices

All of the above can and do contribute to failures in teams adopting agile practices. But this last one is far worse than all of the others combined. It’s bad enough to merit its own blog post.

Being Punk

I came of age in the mid-1970s, and at that time, punk rock was just starting, with bands like The Ramones in the US and The Damned in the UK. In those pre-Spotify days, most of the music you listened to was on the radio, and radio was dominated by record companies pushing their artists, and the big trends at the time were disco and arena rock. Unless you could get a college radio station, these over-produced songs were pretty much all that you could listen to.

Punk arose as a reaction to this stifling control of music. The original idea was DIY – do it yourself! Who cared if you couldn’t play guitar very well, or sing like an angel? Who cared if you didn’t have access to a studio with the latest recording equipment? It was the feeling and energy that mattered above all. Several punk bands started out very raw, but in time learned more about music, recording, and songwriting. They started experimenting with different styles in their songs, and some of the fans would have none of that. The most memorable example of that was when The Clash released their epic album London Calling: there were songs with horns, for crissake! This wasn’t punk! Punk can only have…

And this is where punks fell into their own trap. As a reaction to having to slavishly follow an established musical style, some were now insisting that their favorite bands adhere to this new musical style! They forgot the DIY part, and only thought about the fast, simple chord structures and relentless drumming. They wouldn’t allow these bands to grow and change.

Which brings me to my actual topic: Agile software development. I’ll have more to say in a follow-up post, but I’m sure most can already see the connection.

Fanatical Support

“Fanatical Support®” – that’s the slogan for my former employer, Rackspace. It meant that they would do whatever it took to make their customers successful. From their own website:

Fanatical Support® Happens Anytime, Anywhere, and Any Way Imaginable at Rackspace

It’s the no excuses, no exceptions, can-do way of thinking that Rackers (our employees) bring to work every day. Your complete satisfaction is our sole ambition. Anything less is unacceptable.

Sounds great, right? This sort of approach to customer service is something I have always believed in. And it was my philosophy when I ran my own companies, too. Conversely, nothing annoys me more than a company that won’t give good service to their customers. So when I joined Rackspace, I felt right at home.

Back in 2012 I was asked to create an SDK in Python for the Rackspace Cloud, which was based on OpenStack. This would allow our customers to more easily develop applications that used the cloud, as the SDK would handle the minutiae of dealing with the API, and allow developers to focus on the tasks they needed to carry out. This SDK, called pyrax, was very popular, and when I eventually left Rackspace in 2014, it was quite stable, with maybe a few outstanding small bugs.
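
To give an idea of what that meant in practice, here is a minimal sketch of typical pyrax usage, written from memory; the credentials are placeholders, and the flavor/image selection is purely illustrative:

    import pyrax

    # Authenticate once; pyrax handles tokens and the service catalog.
    pyrax.set_setting("identity_type", "rackspace")
    pyrax.set_credentials("my_username", "my_api_key")

    # Work with Cloud Servers without touching the raw REST API.
    cs = pyrax.cloudservers
    flavor = [f for f in cs.flavors.list() if f.ram == 512][0]
    image = [i for i in cs.images.list() if "Ubuntu" in i.name][0]
    server = cs.servers.create("demo-server", image.id, flavor.id)
    print(server.id, server.status)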

Our team at Rackspace promoted pyrax, as well as our SDKs for other languages, as “officially supported” products. Prior to the development of official SDKs, some people within the company had developed quick-and-dirty toolkits in their spare time that customers began using, only to find out some time later, when they had an issue, that the original developer had moved on and no one knew how to correct problems. So we told developers to use these official SDKs, and that they would always be supported.

However, a few years later there was a movement within the OpenStack community to build a brand-new SDK for Python, so being good community citizens, we planned on supporting that tool, and helping our customers transition from pyrax to the OpenStackSDK for Python. That was in January of 2014. Three and a half years later, this has still not been done. The OpenStackSDK has still not reached a 1.0 release, which in itself is not that big a deal to me. What is a big deal is that the promise for transitioning customers from pyrax to this new tool was never kept. A few years ago the maintainers began replying to issues and pull requests stating that pyrax was deprecated in favor of the OpenStackSDK, but no tools or documentation to help move to the new tool have been released.

What’s worse, is that Rackspace now actively refuses to make even the smallest of fixes to pyrax, even though they would require no significant developer time to verify. At this point, I take this personally. For years I went to conference after conference promoting this tool, and personally promising people that we would always support it. I fought internally at Rackspace to have upper management commit to supporting these tools with guaranteed headcount backing them before we would publish them as officially supported tools. And now I’m extremely sad to see Rackspace abandon these people who trusted my words.

So here’s what I will do: I have a fork of pyrax on my GitHub account. While my current job doesn’t afford me the time to actively contribute much to pyrax, I will review and accept pull requests, and try to answer support questions.

Rackspace may have broken its promises and abandoned its customers, but I cannot do that. These may not be my customers, but they are my community.

Inevitable

It’s about the time of year that I should be posting something about my spring long-distance ride, as I did in 2015 and 2016. Unfortunately, though, I won’t be able to do the ride this year, and may not be doing distance rides for some time; perhaps never again.

Last October I rode in the 2016 Valero Ride to the River, which was a full century (100 miles) on Saturday, and then a “short” 38-mile ride on Sunday. I was surprised at how strong I felt that weekend – it seemed that all my training had paid off! However, I was even more surprised a few weeks later when I woke up and couldn’t bend my left thumb easily. When I did bend it, there was a very noticeable clicking in the muscles and/or tendons. A quick search on Google showed that I had a case of “Trigger Thumb” (also called “Trigger Finger”). I saw my orthopaedist about it, as well as a hand specialist, and after several treatments (and several months of exercise and naproxen) the clicking finally subsided, and I regained normal use of my thumb. Unfortunately, the consensus of the doctors was that bike riding was the culprit, since the normal riding position puts pressure on that part of the hand, especially on the rough roads I frequently had to deal with. Yeah, I know all about recumbent bikes, but that just seems too sad to contemplate now. And hey, if anyone knows a good metal worker who might be able to build a custom set of handlebars, I have some ideas on how to change them to reduce the pressure on the hands.

So why did I title this post “Inevitable”? Because of the underlying cause of the condition: osteoarthritis. I was first diagnosed with arthritis about 20 years ago, and with each subsequent joint problem, my doctors have pretty much told me to resign myself to it, as my joints seem to have a natural tendency to become arthritic. When I was examined for my trigger thumb, the x-rays showed a great deal of underlying arthritis in the thumb joint, which probably made the irritation to the tendons worse. And this wasn’t the first time I had gotten such a diagnosis.

Back in my 20s and 30s, I played a lot of tennis. I was pretty good, too, playing at a solid 4.5 rating. However, I noticed soreness in my shoulder when serving or hitting overheads. As many of my fellow tennis players had rotator cuff injuries that seemed to match my symptoms, I assumed that it was something fixable. But after visiting the doctor, he said my rotator cuff was just fine; rather, I had arthritis in my shoulder. There was no treatment except anti-inflammatory drugs and rest (i.e., not playing tennis). I tried to adjust, but even after some time off it didn’t get better. So I gave up tennis.

Right about that time, my sons had progressed in the soccer world to the point where they were well beyond my ability to coach them. I loved being in the game, so I got my referee badge and did several games a week to keep in shape. I figured that if I could keep up on the pitch with 17-year-olds, I wasn’t too far over the hill!

I had had surgery back in 1992 to repair a torn meniscus in my knee (tennis injury, of course!), but after a couple of years of reffing, I had to have 2 more surgeries, one on each knee. When my left knee started bothering me a couple of years ago, I assumed that I needed yet another surgery to clean it up, but my doctor said that there was no more cartilage left there, and that the pain I was feeling was arthritis. Again, I could take anti-inflammatories to relieve some of the pain, but there was nothing I could do to “fix” this. He advised reducing impact to my knees, so that’s when I started cycling seriously. Now that I have to greatly reduce my cycling, I don’t have a lot of options left. I go to the gym and use the elliptical machine, but that’s not even close to doing some real activity. I did go hiking in Big Bend National Park last weekend, and was pleasantly surprised that my knees and hip didn’t complain very much. Oh, didn’t I mention that I also have arthritis in my hip?

So I don’t know where I’ll turn next. I do know that sitting on my butt is not an option. Maybe knee replacement? And/or hip replacement? Geez, I’m a few months shy of 60, and that seems awfully young to be trading out body parts. So I guess it’s off to work on modified handlebar designs for my bike…

Claims in the Scheduler

One of the shortcomings of the current scheduler in OpenStack Nova is that there is a long interval from when the scheduler selects a suitable host for a new instance until the resources on that host are claimed, making them unavailable to other requests. Now that resources are tracked in the Placement service, we want to move the claim closer to the time of host selection, in order to reduce (or eliminate) the race condition. I’m not going to explain the race condition here; if you’re reading this, I’m assuming it is already well understood, so let me just summarize my concern: the current proposed design, as seen in the series starting with https://review.openstack.org/#/c/465175/, could be made much better with some design changes.

At the recent Boston Summit, which I was unable to attend due to lack of funding by my employer, the design for this change was discussed, and the consensus was to have the scheduler return a list of hosts for each instance to the super conductor, and then have the super conductor attempt to claim the resources for the first host returned. If that claim fails, the super conductor discards that host and tries to claim the resources on the second host. When it finally succeeds in a claim, it sends a message to that host to start building the instance, and that message includes the list of alternative hosts. If something happens that causes the build to fail, the compute node sends the request back to its local conductor, which unclaims the resources and then tries each of the alternates in order: first claiming the resources on that host, and, if successful, sending the build request there. Only if all of the alternates fail does the overall request fail.

I believe that while this is an improvement, it could be better. I’d like to do two things differently:

  1. Have the scheduler claim the resources on the first selected host. If it fails, discard it and try the next. When it succeeds, find other hosts in the list of weighed hosts that are in the same cell as the selected host in order to provide the number of alternates, and return that list.
  2. Have the process asking the scheduler to select a host also provide the number of alternates, instead of having the scheduler use the current max_attempts config option value.

On the first point: the scheduler already has a representation of the resources that need to be claimed. If the super conductor does the claiming, it will have to re-generate that representation. Sure, that’s not all that demanding, but it makes for a cleaner design not to repeat things. It also ensures that the super conductor gets a good host from the start. Let me give an example. If the scheduler returns a chosen host (without claiming) and two alternates (the standard behavior using the config option default), the conductor has no guarantee of getting a good host. In the event of a race, the first host may fail to allocate resources, and now there are only the two alternates to try. If the claim were done in the scheduler, though, when that first host failed it would be discarded and the next host tried, until the allocation succeeded. Only then would the alternates be determined, and the super conductor could confidently pass the build request on to the chosen host. Simply put: by having the scheduler do the initial claim, the super conductor is guaranteed to get a good host.

Another problem, although much less critical, is that the scheduler still has the host do consume_from_request(). With the claim done in the conductor, there is no way to keep this working correctly if the initial host fails: we will have consumed on that host, even though we aren’t building on it, and will not have consumed on the host we actually select.

On the second point: we have spent a lot of time over the past few years trying to clean up the interface between Nova and the scheduler, and have made a great deal of progress on that front. Now I know that the dream of an independent scheduler is still just that: a dream. But I also know that the scheduler code has been greatly improved by defining a cleaner interface between it and Nova. One of the items that has been discussed is that the config option max_attempts doesn’t belong in the scheduler; it really belongs in the conductor, and now that the conductor will be getting a list of hosts from the scheduler, the scheduler is out of the picture when it comes to retrying a failed build. The current proposal not only leaves that config option in the scheduler, but makes the scheduler depend on it for its functioning, which once again makes the scheduler Nova-centric (and Nova-exclusive). It would be a much cleaner design to simply have the conductor ask for the number of hosts (chosen + alternates), and have the scheduler use that number. Yes, it requires a change to the RPC interface, but that is to be expected if you are changing a fundamental behavior of the scheduler. And if the scheduler is ever moved into a separate module, it is just another parameter. Really, that’s not a good reason to stick with a poor design.
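
To make this concrete, here is a rough, illustrative sketch of the scheduler-side logic I have in mind. The names (filter_and_weigh, claim_resources, cell_uuid, NoValidHost) are stand-ins rather than the actual Nova code, which lives in the review series linked at the end of this post:

    def select_and_claim(request_spec, num_alternates):
        """Pick a host, claim its resources in Placement, then gather
        alternates from the same cell. Illustrative pseudo-Python only."""
        weighed_hosts = filter_and_weigh(request_spec)  # existing scheduling logic

        for idx, host in enumerate(weighed_hosts):
            # Try to claim the resources for this host in Placement.
            if claim_resources(request_spec, host):
                # The claim succeeded, so this is the chosen host. Alternates
                # must be in the same cell, since a failed build is retried by
                # that cell's local conductor.
                alternates = [h for h in weighed_hosts[idx + 1:]
                              if h.cell_uuid == host.cell_uuid][:num_alternates]
                return [host] + alternates
            # The claim failed (we lost the race), so discard this host and
            # move on to the next one in the weighed list.

        raise NoValidHost()

Note that num_alternates is supplied by the caller (the conductor), rather than being read from the scheduler’s max_attempts config option, which is exactly the second point above.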

Since some of the principal people involved in this discussion are not available now, and I’m going to be away at PyCon for the next few days, Dan Smith suggested that I post a summary of my concerns so that all can read it and have an idea what the issues are. Then next week sometime when we are all around and have the time to discuss this, we can hash it out on #openstack-nova, or maybe in a hangout. I also have pushed a series that has all of the steps needed to make this happen, since it’s one thing to talk about a design, and it’s another to see the actual code. The series starts here: https://review.openstack.org/#/c/464086/. For some of the later patches I haven’t finished updating the tests to match the change in method signatures and returned value structures, but you should be able to get a good idea of the code changes I’m proposing.

Hertz and the Great Tollway Ripoff

Last Thanksgiving we went on a wonderful holiday in the Florida Keys. We flew to Miami, picked up a rental car from Hertz, and drove away. There are a couple of toll roads along the way, but we had Hertz’s PlatePass, which would work on those toll sensors. I’ve used similar things with other rental companies, where a few weeks later I receive a bill for the accumulated tolls.

Not so with PlatePass, however. Not only did I get a bill for the tolls (about $11), but a $25 service charge on top of that! It turns out that Hertz charges $5 a day (up to $25), even on days when you don’t run up any tolls! I’m stuck paying this, but you can be sure that I will avoid using or recommending Hertz in the future. Yes, I found out that it’s documented on their website, but it wasn’t disclosed when I picked up the car, nor was I given an option to decline it. So yeah, Hertz and/or PlatePass made an extra $25 off of me on that trip, but they will lose so much more than that in the future. This is what happens when businesses are short-sighted and go after the quick buck instead of developing long-term relationships with their customers.

API Stability Thoughts

Recently in the OpenStack API Working Group we have been spending a lot of time and energy on establishing the API Stability guidelines that will serve as the basis for the supports-api-stability tag proposed by the OpenStack Technical Committee. Tags are a way for consumers of OpenStack to get a better idea of the state of the various projects, and this particular tag is intended to reassure consumers that the API of a project carrying it will not change in a breaking way. The problem is defining what exactly constitutes a “breaking change”.

While there are about as many opinions as there are participants in the discussion, they all roughly fall into one of two camps:

  1. A change that simply adds to the existing API, such as returning additional values in addition to the current ones, isn’t breaking stability, as existing clients will still receive all the information they expect, and will ignore the additional stuff.
  2. Any cloud that says it is running a particular version of an API should return the exact same information. In other words, a client written for Cloud A will work without modification with Cloud B. If something changes that would make these responses different, that change must be reflected in a new version, and the old version should remain available for a “long time” (precisely how long a “long time” is is a completely separate discussion in itself!).

I wrote about the second point above in an earlier post, which attempted to summarize that position after some discussion with many in the community who were pushing cloud interoperability (or “interop”). And at the recent Atlanta PTG (which I recapped here), we discussed this issue at length. The problem was that those who fell into Camp #1 above were at the morning session, while Camp #2 was there in the afternoon. So while the discussions were fruitful, they were not decisive. The discussions and comments on the Gerrit review for the proposed change to the API Stability Guidelines since the PTG reflect this division of opinion and lack of resolution.

But today during discussions in the API-WG meeting on IRC, it dawned on me that there is a fundamental reason we can’t reconcile these two points of view: we’re talking about 2 different goals. Camp #1 is concerned with not breaking clients whose applications rely on an OpenStack service’s API, while Camp #2 is concerned with not having different cloud deployments vary from each other.

The latter goal, while admirable, is very difficult to achieve in practice for anything but the most basic stuff. For one thing, any service that uses extensions will almost certainly fail, because there is no way to guarantee that deployments will always install and run the same extensions – that’s sort of the point of extensibility, after all. And during the discussions at the PTG, we tried to identify versioning systems that could meet the interop requirements, and the only one anyone could describe was microversions. So that means to satisfy Camp #2, a service would have to use microversions, period.
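
To make the microversion point concrete, here is a hedged sketch of a client pinning a request to an explicit version; the header format is the one Nova uses for newer microversions, but the version number and endpoint here are made up for illustration:

    import requests

    # Pin the request to an explicit microversion. With a pinned version,
    # any cloud claiming to support "compute 2.42" must return the same
    # response structure; new fields only appear at newer microversions,
    # which the client has to ask for explicitly.
    headers = {
        "X-Auth-Token": "...",                    # token elided
        "OpenStack-API-Version": "compute 2.42",  # illustrative version number
    }
    resp = requests.get(
        "https://cloud-a.example.com/compute/v2.1/servers/detail",
        headers=headers)
    print(resp.status_code, sorted(resp.json()["servers"][0].keys()))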

So I propose a slightly different route forward: let’s define 2 tags to reflect these two different types of “stability”. Let’s use the original tag “assert:supports-api-compatibility” to mean the Camp #2 standard, as its emphasis is interoperability. Then add a separate “assert:supports-api-stability”, which reflects the Camp #1 understanding of never breaking clients.

It is important to note that this second tag is not meant to indicate a “light” version of the first just because its requirements would be less difficult to attain. It reflects support for a different, but still important, kind of continuity for users. Each project can decide which of these goals is relevant to it, and will make its APIs better by achieving either (or both!) of them.

Atlanta PTG Reflections

Last week was the first-ever OpenStack PTG (Project Teams Gathering), held in Atlanta, Georgia. Let’s start with the obvious: the name is terrible, which made it very hard to explain to people (read: management at your job) what it was supposed to be, and why it was important. “The Summit” and “The Midcycle” were both much better in that regard. Yes, there was plenty of material available on the website, but a catchier name would have helped.

But with that said, it was probably one of the most productive weeks I’ve had as an OpenStack developer. In previous gatherings there were always things that got in the way. The Summits were too “noisy”, with all the distractions of keynotes, the marketplace, presentations, and business/marketing people all over the place. The midcycles were much more focused on developer issues, but since they were usually single-team events, there was very little cross-project interaction. The PTG represented the best of both without their downsides. While I always enjoyed Summits, there was always a bunch of stuff going on that kept us from focusing on our work.

The first two days were devoted to cross-project matters, and the API Working Group certainly fits that description, as our goal is to help all OpenStack projects develop clean, consistent APIs. So as a core member of the API-WG, I was prepared to spend most of my time in these discussions. However, on Monday morning our room was fairly empty, probably because we weren’t assigned a room until the night before, so not many people knew about it. So we all pecked at our laptops for an hour or so, and then I figured we’d just start. The topic was the proposed changes to the API stability guidelines that would define what the assert:supports-api-compatibility tag requires of a project. I outlined the basic points, and Chris Dent filled in some more details. I was afraid that it might end up being Chris and me doing most of the talking, but people started adding their own points of view on the matter. Before long the room became more crowded; I think the lively discussion attracted people (well, that and the sign that Chris added in the hallway!).

The gist of the discussion was just how strict we needed to be about which changes to a public API require a version change. Most of the people in the room that morning were of the opinion that while removing an API or changing the behavior of a call would certainly require a version change, non-destructive changes like adding a new API call, or adding an additional field to a response, should be fine without one, since they shouldn’t break anything. I tried to make the argument for interop API stability, but I was outnumbered 🙂 Fortunately, I ran into the biggest (and loudest! 🙂) proponent of that view, Monty Taylor, at lunch, and convinced him to come to the afternoon session and make his point of view heard clearly. And he did exactly that! By the end of the afternoon, we were all in agreement that any change to any API call requires a version increase, and so we will update the guidelines to reflect that.

Tuesday was another cross-project day, with discussions on hierarchical quotas taking up much of the morning, followed by a Nova-Neutron session and another session with the Cinder folks on multi-attach. What was consistent across these sessions was a genuine desire to get things working better, without any of the finger-pointing that could easily arise when two teams get together to figure out why things aren’t as smooth as they should be.

Wednesday began the team-specific sessions. Nova was given a huge, cavernous ballroom. It had a really bad echo, as well as constant fan noise from the air system, and so for someone like me with hearing loss, it was nearly impossible to hear anything. Wish I had worked on my lip reading!

The cavernous ballroom as originally set up for the Nova team sessions.

We quickly decided to re-arrange the tables into a much more compact structure, which made it slightly better for discussions.

Moving the tables into a smaller rectangle made it a little easier to hear each other.

We had a full agenda, with topics such as cells V2, quotas, and the placement engine/API pretty much taking up Wednesday and Thursday. And like the cross-project days, it felt like we made solid progress. Anyone who still had doubts about the new format was convinced by now that the PTG was a big improvement! The discussions about Placement were especially helpful for me, because we went into the details of the complex nesting possibilities of NUMA cells and SR-IOV devices, and what the best way (if any) to effectively model them would be.

There was one dark spot on the event: my laptop died a horrible death! Thursday morning I opened the lid that I had closed a few hours earlier after an evening of email answering and Netflix watching, only to be greeted with this:

You do NOT want your laptop screen to look like this!

It had made a crackling sound as the screen displayed kernel panic output, so I unplugged the charger and closed the lid. After waiting several anxious minutes, I tried to turn the laptop on. Nothing. Dead. No response at all: no sound, no video… nothing. I tried again and again, using every magical keypress incantation I knew, and nothing. Time of death: 0730.

Sure, I still had my iPhone, but it’s really hard to do serious work that way. For one, etherpads simply don’t work in iOS browsers. It’s also very hard to see much of a conversation in an IRC client on such a small screen. All I could do was read email. So I spent the rest of the PTG feeling sorry for myself and my poor dead laptop. David Medberry lent me his keyboard-equipped Kindle for a while, and that was a bit better, but still, when you have a muscle-memory workflow, nothing will replace that.

The Foundation also arranged to have team photos taken during the PTG. You can see all the teams here, but I thought I’d include the Nova team photo here:

The Nova Team at the Pike PTG

Right after the last session on Thursday was a feedback session for the OpenStack Foundation to get the attendees’ impressions of what went well, what was terrible, what should they keep doing, what should the never ever do again, and everything in between. In general, most people liked the PTG format, and felt that it was a very productive week. There were many complaints about the hotel setup (room size, noisy AC, etc.), as well as disappointment in the variety of meals and lack of snacks, but lots of praise for the continuous coffee!!

Thursday night was the Nova team dinner. We went to Ted’s Montana Grill, where we were greeted by a somewhat threatening slogan:

Hmmm… are you threatening me???

The staff wasn’t threatening at all, and quickly found tables for all of us. On the way through the restaurant we passed several other tables of Stackers, so I guess that this was a popular choice. We had a wonderful dinner, and on the walk home, Chet Burgess, whose parents still live in the Atlanta area, suggested we stop at the Westin hotel for a quick drink. That sounded great to me, so four of us went into the hotel. I was surprised that Chet walked right past the bar, and went to the elevators. Turns out that there is a rotating bar up on the 73rd floor! Here is the group of us going up the elevator:

Top: John Garbutt, Tony Breeds. Bottom: Chet Burgess and Yours Truly

It was dark in the bar area, so I couldn’t get a nice photo, but here’s a stock photo to give you an idea of what the bar looked like:

The Sundial Bar at the Westin Hotel

Big thanks to Chet for organizing the dinner and suggesting having drinks up in the heights of Atlanta!

Friday was a much lower-key day. Gone were the gigantic ballrooms; we moved down to the lower level of the hotel for the final day. Many people had already left, as many teams did not schedule three full days of sessions. The Nova team used the first part of the day for the Ocata retrospective, talking about what went well, what didn’t go so well, and how we can improve as we start working on Pike. The main points were that while communication among the developers was better, it still needs to improve. We also agreed on the need for more visual documentation of the logic flows within the code. The specs only describe the surface of the design, and many people (like myself) are visual learners, so we’ll try to get something like that done for the Placement logic so that everyone can better understand where we are and where we need to go.

I had to leave around 4pm on Friday to catch my flight home, so I headed to the ATL airport. While walking through the terminal I saw a group of men standing in one of the hallways, and recognized that one of them was Rep. John Lewis, one of the leaders of the Civil Rights movement along with Dr. Martin Luther King, Jr., whose birthplace and historic site I visited earlier in the week. I shook his hand, and thanked him for everything that he has done for this country. Immediately afterwards I texted my wife to tell her about it, and she chastised me for not getting a photo! I explained that I was too nervous to impose on him. A little while later I walked over to another part of the airport where I knew there was a restroom, since I had to empty my water bottle before going through security. When I got there, I saw some of the same group of men I had seen with Rep. Lewis earlier, but he was no longer among them. Then I looked over by the entrance to the men’s room, and I saw Rep. Lewis posing for a selfie with the janitor! I figured he wouldn’t mind taking one with me, so when he came out I apologized for bothering him again, and asked if he would mind a photo. He smiled and said it was no problem, so…

Ran into one of the great American heroes, Representative John Lewis, in the Atlanta airport. He was gracious enough to let me take this photo.

I admit that I was too excited to hold the phone very still! But a blurry photo is still better than no photo at all, right? I’ve met several famous people in my lifetime, but never one who has done as much to make the world a better place. Looking back, it was a fitting end to a week that involved people of different nationalities, races, and religions coming together to help build free and open software.

Interop API Requirements

Lately the OpenStack Board of Directors and Technical Committee have placed a lot of emphasis on making OpenStack clouds from various providers “interoperable”. This is a very positive development after years of different deployments adding various extensions and modifications to the upstream OpenStack code, which had made it hard to define just what it means to offer an “OpenStack Cloud”. So the Interop project (formerly known as DefCore) has been working for the past few years to create a series of objective tests that cloud deployers can run to verify that their clouds meet these interoperability standards.

As a member of the OpenStack API Working Group, though, I’ve had to think a lot about what interop means for an API. I’ll sum up my thoughts, and then try to explain why.

API Interoperability requires that all identical API calls return identical results when made to the same API version on all OpenStack clouds.

This may seem obvious enough, but it has implications that go beyond our current API guidelines. For example, we currently don’t recommend a version increase for changes that add things, such as an additional header or a new URL. After all, no one using the current version will be hurt by this, since they aren’t expecting those new things, and so their code cannot break. But this only considers the effect on a single cloud; when we factor in interoperability, things look very different.

Let’s consider the case where we have two OpenStack-based clouds, both claiming to run version 42 of an API. Cloud A is running the released version of the code, while Cloud B is tracking upstream master, which has recently added a new URL (which in the past we’ve said is OK). If we call that new URL on Cloud A, it will return a 404, since that URL is not defined in the released version of the code. On Cloud B, however, since the URL is defined in the current code, it will return anything except a 404. So we have two clouds claiming to run the same version of OpenStack, yet identical calls to them produce very different results.
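
Expressed as a hedged sketch (the /v42/widgets URL and endpoints are invented purely for illustration):

    import requests

    # Both clouds claim to run API version 42, but only Cloud B's code
    # defines the newly added (purely illustrative) /widgets URL.
    for cloud in ("https://cloud-a.example.com", "https://cloud-b.example.com"):
        resp = requests.get(cloud + "/v42/widgets",
                            headers={"X-Auth-Token": "..."})
        print(cloud, resp.status_code)

    # Cloud A returns 404: the URL does not exist in the released code.
    # Cloud B returns 200: the same call, to the same API version, gives
    # a different result, which is exactly the interoperability problem.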

Note that when I say “identical” results, I mean structural things, such as response code, format of any body content, and response headers. I don’t mean that it will list the same resources, since it is expected that you can create different resources at will.

I’m sure this will be discussed further at next week’s PTG.