OpenStack PTG, Denver 2019

PTG Denver 2019 Logo

Immediately following the Open Infrastructure Summit in Denver was the 3-day Project Teams Gathering (PTG). This was the first time that these two events were scheduled back-to-back. It was in response to some members of the community complaining that traveling to 4 separate events a year (2 Summits, 2 PTGs) was both too expensive and too tiring. The idea was that now you would only have to travel twice a year.

Now that I’ve experienced these back-to-back events, I think that this was a giant step backwards. Let me explain why.

First, it was exhausting! Being in rooms with lots of people for days on end is very draining for those of us who are introverts. Sure, we can be outgoing and interact with people, but it takes a toll, and downtime is necessary to recharge the psychological batteries. At several points I found myself faced with attending a session or finding an empty room to work on stuff by myself, and the latter often won out.

Second, the main idea of the PTG was to take the midcycle get-togethers that many teams had been doing, and formalize a single place for them to meet. The feeling was that having these teams in the same place would spur cross-project discussions, and that definitely was the case. But now that teams will only be getting together every 6 months, we’re back to the situation we were in before the PTGs were created: many teams will need a mid-cycle meeting to ensure that everyone is on-track to complete the goals for that release cycle.

Third, being away from home for an entire week is too long. OK, maybe I’m just getting old, but I really do like being home. One of the nice things about traveling to conferences is tacking on a few extra days to explore the area. For example, after last year’s PTG in Denver, my wife flew out to join me, and we spent a long weekend in Rocky Mountain National Park and other nearby natural areas. But after a solid week of stuff, I couldn’t wait to go home.

Fourth, many people time their return travel so that they miss the last day (or part of it). My unscientific observation was that attendance on the last day of this PTG dropped more sharply than at previous PTGs. I think that’s because missing one day out of 6 doesn’t seem as severe as missing one day out of 3.

As is the tradition at PTGs, there was a feedback session at lunch on the second day, and a lot of the feedback was in line with my observations. Of course, there were a lot of people who liked the format, and for the exact opposite reasons! Goes to show you can’t please everyone.

As for the sessions, the API-SIG was scheduled in a room for Thursday morning. I hung out there, and a few people did come in, but I think we had covered all of the outstanding issues at the BoF session on Tuesday. So I got to spend a lot of the morning hacking on Neo4j, and was able to implement a lot of the functionality that is missing in Placement: nested providers, shared providers, and quotas. I put together a series of Jupyter Notebooks that demonstrated all of these things working with just a small amount of code, so that I could share them with the other people involved in Placement.

And then there was lunch! After 3 days of either going hungry or grabbing something nearby, it was so much nicer to sit down with people while eating lunch. Unfortunately, the box lunches provided seemed to have been kept at near-freezing temperatures until just before the lunch break, and were almost too cold to eat. Still, I much preferred them to not having any lunch session at all, if for nothing else than being able to share a meal with other OpenStackers.

In the afternoon we had the Nova – Placement cross-project session, to which the Placement PTL, Chris Dent, brought some bottles of bubbly to celebrate the deletion of the Placement code base from Nova. That commit ended up getting delayed for one more day, but still, it was a milestone to celebrate.

The rest of the session was personally painful to sit through, as the topics revolved around the things that we have been fighting to implement for over 2.5 years: nested providers, shared providers, tree affinity, and other complex relationships among resources. It was painful because I just wanted to shout out “WE’RE USING THE WRONG TOOL!”, as these things flow naturally from a graph database. I was able to get all of them working in my spare time over the previous few days. I like to think that I’m a pretty smart guy, but I’m not THAT smart. It’s just that the tool fits the problem domain.

Nested Provider demo
Jupyter notebook showing a section of the Nested Provider demo. It’s a little hard to see, but the results show that there are two possible solutions, both starting with the ComputeNode named ‘balanced_testnode’. Each solution shows that the requested resources both came from the same NUMA node. This is one of the things that comes naturally with a graph DB but is really, really hard in a SQL DB.
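
To give a flavor of why this kind of result falls out so easily, here is a minimal sketch of the sort of query I mean, using the official neo4j Python driver. The connection details, labels, relationships, and property names are my own illustration for this post, not the actual schema from my notebooks.

from neo4j import GraphDatabase

# Assumes a local Neo4j instance; everything below is illustrative.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "secret"))

query = """
MATCH (cn:ComputeNode)-[:CONTAINS]->(numa:NUMANode)
WHERE numa.vcpu_free >= $vcpu AND numa.memory_mb_free >= $memory_mb
MATCH (cn)-[:MEMBER_OF]->(:Aggregate)<-[:MEMBER_OF]-(ss:SharedStorage)
WHERE ss.disk_gb_free >= $disk_gb
RETURN cn.name AS compute_node, numa.name AS numa_node, ss.name AS disk_source
"""

with driver.session() as session:
    # Ask for VCPU and memory from the same NUMA node, plus disk from a
    # provider shared with the compute node through an aggregate.
    for rec in session.run(query, vcpu=2, memory_mb=2048, disk_gb=10):
        print(rec["compute_node"], rec["numa_node"], rec["disk_source"])

driver.close()

The affinity requirement isn’t extra code at all: because both resource checks are bound to the same numa variable in the MATCH pattern, any solution necessarily pulls VCPU and memory from a single NUMA node, which is exactly what the notebook output above shows.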

I spent that evening working to finish up my Neo4j examples, as I had asked several key Placement contributors to take a few minutes to sit down with me so that I could show them what I had done. On Friday morning I showed my graph work to several people, and while each reaction was different, there was a definite flow from skepticism to curiosity and then (for some) to agreement. One of the people to whom I especially wanted to show this was Jay Pipes, whom I had mentioned in my earlier posts about experimenting with graph DBs. He had already seen the potential after those blog posts, but he was concerned about developers having to learn some new, cryptic language in order to implement this. However, after about 10 minutes of my demos, I showed him the query I was currently working on that wasn’t quite right. He looked it over, made a suggestion, and when I ran it, it worked correctly! So I think that if he could get a working knowledge of the Cypher Query Language after just 10 minutes of seeing it, it won’t be hard for other devs to pick it up.

Later in the day we had a good discussion with the Ironic team about a need they had for stand-alone operation (i.e., not running under Nova). In such situations, they wanted to use the full resource amounts in Placement, as opposed to the current approach used in Nova, which is to represent an Ironic node as an inventory of 1 thing. The issue with representing a baremetal server as, say, 500GB of disk and 16 CPUs is that it may occasionally be selected to satisfy a request for 250GB and 8 CPUs. Since a baremetal server cannot be shared, we needed to figure out a way to fully consume the resources on the machine when it was selected, even if the request was for a lower amount. Several ideas were floated and discussed, all with varying degrees of messiness. We finally settled on adding a new API endpoint that would accept a Resource Provider, and allocate all of its resources so that it would no longer be available to any other request.
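
The new endpoint was only sketched out verbally, so I won’t guess at its final shape. The underlying idea, though, is the same thing you could do today by hand against the existing Placement allocations API: write an allocation against the provider that consumes every last unit of its inventory. A rough sketch, with a made-up endpoint, token, and UUIDs:

import requests

PLACEMENT = "http://placement.example.com:8080"   # hypothetical endpoint
HEADERS = {"OpenStack-API-Version": "placement 1.17",
           "x-auth-token": "admin"}               # however you authenticate

provider = "f1b9a3e2-6c1d-4c1a-9a8e-2f6d3b7c5a10"  # the Ironic node's provider
consumer = "0c2f4f65-2a9b-4d6e-8f3a-7f9f1d2e3c44"  # whatever is claiming it

# 1. Read the provider's full inventory.
inv = requests.get(
    f"{PLACEMENT}/resource_providers/{provider}/inventories",
    headers=HEADERS).json()["inventories"]

# 2. Write an allocation for the entire (non-reserved) amount of every
#    resource class, leaving nothing for any other request.
payload = {
    "allocations": {
        provider: {"resources": {rc: data["total"] - data.get("reserved", 0)
                                 for rc, data in inv.items()}},
    },
    "project_id": "4a5c3d2e-1b0f-4e9d-8c7b-6a5f4e3d2c1b",
    "user_id": "7b8c9d0e-1f2a-3b4c-5d6e-7f8a9b0c1d2e",
}
resp = requests.put(f"{PLACEMENT}/allocations/{consumer}",
                    json=payload, headers=HEADERS)
print(resp.status_code)

The endpoint we discussed would essentially fold those two steps into a single atomic call, so a racing request couldn’t grab part of the machine in between the read and the write.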

Hallway Sign

On Saturday morning we started with the Cyborg-Nova cross-project session, at which we could finally see a demonstration of Cyborg in action! I had thought that the Summit sessions would have been much more useful if the demo had been shown then, so that we could have something concrete to discuss. I was glad to see that Cyborg is working and handling accelerators after a few years of planning and design, and I look forward to making further progress integrating it with Nova and Placement.

There were a few discussions in the afternoon that had to do with representing nested resources and their relationships. Once again, it was difficult to listen to these attempts to represent complex relationships in a SQL DB when I had just demonstrated how simple it was in a graph DB. It was telling that the session was entitled “Implementing Nested Magic” – getting this working in SQL does seem to require supernatural powers!

I had to leave around 3pm to get to the airport, so I missed anything after that. But most people seemed to have left by then anyway. It had been a long week, and I was burnt out. I also missed being home with my wife, sleeping in my own bed, working at my own desk, and eating my own food. I sincerely hope that the Foundation reconsiders this back-to-back setup. I realize that they are trying to save money wherever possible, but this just wasn’t worth it.

Open Infrastructure Summit, Denver 2019

The first ever Open Infrastructure Summit was held in the last week of April 2019 at the Colorado Convention Center in Denver, CO. It was the first summit to be officially held under the new name since the re-branding from OpenStack to Open Infrastructure began last year. Otherwise, it felt just like the OpenStack summits of old.

The keynotes were better than at prior summits – I think the sponsors got the feedback that no one was interested in sitting through a recap of “how they did X with OpenStack”, and instead focused more on what they intended to do with it. There was a great demo by Chris Hoge and Julia Kreger that showed a Kubernetes operator managing a bare metal infrastructure; it demonstrated very clearly that the typical media message of “Kubernetes is replacing OpenStack” is silly. They exist in different problem spaces, and work well together. The only place Kubernetes is replacing OpenStack is in the hype cycle.

After the keynotes I went to the Nova Project Update session. It was very thorough, but felt more like someone reading release notes out loud. I had hoped for more of a discussion about the thinking that went into some of the things that were worked on or are being planned rather than just a straight recitation.

After that was lunch – sort of. For the first time since these summits began, lunch was not provided. Instead, you were supposed to go to one of the many restaurants in the area and buy your own. However, since we had pretty poor weather—freezing temperatures, snow, and rain—walking around downtown Denver wasn’t what I felt like doing. Judging by how packed the restaurant in the hotel across the street was, a lot of other people felt the same way. I understand that times are not as heady as in previous years when OpenStack was the latest hotness, but this seemed like a poor place to cut back. I always enjoyed sharing a table with a bunch of other OpenStackers and learning about where they were from and what they were doing with OpenStack. Going out to lunch meant that people tended to stay with groups they already knew.

The afternoon snacks were also gone, which is no big deal for me, but others mentioned to me that they missed having them. Finally, they didn’t have a signature piece of conference swag. I’m typing this wearing the OpenStack hoodie I got back at the Paris 2014 summit, and have my sweatshirt from Tokyo 2015 in my room. Well, OK, they did give out a pair of socks, but they weren’t tied to the event. It’s not a huge thing, but not having something this time really makes things feel… different. And not in a good way.

There weren’t any sessions in the afternoon that I really wanted to go to, so instead I worked on two OpenStack-related projects: etcd-compute and using Graph Databases, such as Neo4j, to hold information for the Placement service. I have previously written about my work with both of these. And since the author of etcd-compute, Chris Dent, was also here at the summit, it was a perfect time to work on it together, so I set up several VMs for us to “play with”.

Monday evening after the sessions was the “Marketplace Mixer”, which is a way to get the attendees to visit the vendor area. They provided food and beverages, and I had my badge scanned several times in exchange for some local craft beer. There wasn’t a lot offered by the vendors that would be useful to me, but I did run into a lot of people I knew. When you’re in your 10th year of working on OpenStack, you get to know quite a few people!

On Tuesday I started with a session on Nova-Cyborg integration. Or at least that was what it was advertised as. It turned out to be more of an “Introduction to Cyborg Concepts” talk, rather than focusing on where the two projects needed to integrate.

cyborg-nova
The crowd at the Cyborg-Nova integration session

Later on was the API-SIG BoF (Birds of a Feather) session that I headed up. There hadn’t been much traffic in the SIG ahead of the summit, so I was happily surprised when several people showed up. We ended up having a good discussion on a variety of API-related topics, and I got to meet several of the people who have joined some of the more recent IRC discussions and Office Hours, whom I had previously known only by their IRC handles. It’s always nice to put a face to a name.

In the afternoon was a session to update everyone on the process of extracting Placement from Nova. In the past this has been a somewhat heated topic, but this time everyone seemed to understand where things were and were pretty cool with it. There weren’t any long discussions, so the session finished early. I guess that’s a very good sign that we handled that process well.

The final session of the afternoon was to discuss what the various SIGs (Special Interest Groups) and WGs (Working Groups) needed to be successful. Since the API-SIG has been around for many years, we didn’t really have any needs along these lines. Sure, it would be great to get more people involved, but it isn’t critical. Some of the newer groups explored ways of getting the word out about their existence, which is always a problem. There is so much going on in the OpenStack world that getting people to pay attention to yet another thing is always challenging.

That evening was the Open Infrastructure party, sponsored by Trilio, Mirantis, Red Hat, Open Telekom Cloud, & AVI Networks. It was held in The Church Nightclub, which is an old church that has been converted to a nightclub. There was an open bar and food available, and they had a band playing for entertainment. The location was fun, but being indoors with loud music meant that there was only so much conversation you could have. Still, it was fun!

Open Infrastructure Party
The crowd at the Open Infrastructure Party at the Church Nightclub
church niteclub
A view from higher up that shows how an old church was converted into a nightclub. You can see some of the band playing at the very bottom.

There weren’t any talks on Wednesday morning that I really wanted to attend, so I spent most of the morning in the designated hacking room, working for a while on the etcd-compute project, and then on implementing, in my graph database code, many of the features that Placement currently lacks. I managed to implement passing a tree structure to represent nested resource providers so that it creates the corresponding nodes and relationships in the database. This implementation is becoming more and more complete, and I hope that when I show it to others this week they are able to get out of their MySQL comfort zone and see how much better this approach is for representing resources.
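
For a sense of what that tree-to-graph step looks like, here is a stripped-down sketch of the approach, again with the neo4j Python driver. The dict layout, labels, and relationship names are a simplification I made up for this post rather than a copy of my actual code.

from neo4j import GraphDatabase

# A simplified provider tree: a compute node with two NUMA child providers.
tree = {
    "name": "balanced_testnode",
    "resources": {"DISK_GB": 500},
    "children": [
        {"name": "numa0", "resources": {"VCPU": 8, "MEMORY_MB": 4096}, "children": []},
        {"name": "numa1", "resources": {"VCPU": 8, "MEMORY_MB": 4096}, "children": []},
    ],
}

def create_provider(tx, node, parent_name=None):
    """Create one provider node, link it to its parent, then recurse."""
    tx.run("MERGE (p:ResourceProvider {name: $name}) SET p += $resources",
           name=node["name"], resources=node["resources"])
    if parent_name:
        tx.run("""MATCH (parent:ResourceProvider {name: $parent}),
                        (child:ResourceProvider {name: $child})
                  MERGE (parent)-[:CONTAINS]->(child)""",
               parent=parent_name, child=node["name"])
    for child in node["children"]:
        create_provider(tx, child, parent_name=node["name"])

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "secret"))
with driver.session() as session:
    session.write_transaction(create_provider, tree)
driver.close()

Nested providers of any depth come along for free: the recursion just keeps adding CONTAINS relationships, and queries walk the tree with a variable-length pattern instead of the self-joins a SQL schema would need.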

I went to lunch with some of the members of my team at IBM who were at the Summit, along with some people from Red Hat with whom we are working to ensure that their various offerings run as well on Power hardware as on x86. So while the pizza was tasty, it was definitely a working lunch. It was also great to meet some of the people I had only known online before.

The Red Hat – IBM lunch *after* the food had been eaten.

After lunch was a session focused on the gaps between Nova functionality and what has been implemented in OpenStack Client. Most of the missing functionality is concerned with supporting new microversions, and this support is several years behind. I’m not sure how effective the discussions were, since what is really needed is for people to take ownership of some of the needed tasks, and I didn’t hear a lot of that happening.

After that I went to the Cyborg Project Update. Once again, it probably would have been much more useful to anyone who hadn’t been following along with the project, so while I didn’t get much from it, there was a lot of information presented on the current state and future plans for Cyborg.

And that was it! The end of another Summit, even if it was the first. That evening I met my sister for dinner. She lives in the Denver area, and it was great to catch up with her and spend some time relaxing after 3 long days. But the relaxation will be short-lived, as the Train PTG starts first thing tomorrow morning!

Geri & Ed
Selfie with my big sister Geri

More fun with etcd-compute

Last time I ended my work getting etcd-compute running at the point where I needed to configure the virtual networking. I’ve been busy the past few days with meetings and other work-related stuff, so it’s taken me a while to continue on this experiment. But I have some time now; let’s jump back in!

The reason I thought that I needed to set up virtual networking was that when I ran ip a on my controller node, all I had was the loopback and main ethernet interfaces. The directions for etcd-compute talked about setting up the metadata server by adding the IP address it uses to a virtual bridge: sudo ip addr add 169.254.169.254 dev virbr0. As I didn’t have such a bridge on my VM, I figured I had to add it. I tried several guides on adding a bridge to an Ubuntu server, but each one ended up messing up the networking, making the VM unreachable. I ended up re-creating my etcd1 VM so many times that I gave up and figured I’d try without the metadata server. I started the placement and etcd servers by running docker.sh, and then, just on a lark, I re-ran ip a. This time it showed:

ed@etcd1:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether fa:16:3e:90:6d:d0 brd ff:ff:ff:ff:ff:ff
    inet 9.114.111.201/24 brd 9.114.111.255 scope global ens3
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe90:6dd0/64 scope link
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:35:c1:0d brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:35:c1:0d brd ff:ff:ff:ff:ff:ff

I’m not sure how those entries for ‘virbr0’ and ‘virbr0-nic’ got added (maybe docker added them?), but I wasn’t going to worry about that! So I ran the following commands, and they worked without a problem:

sudo ip addr add 169.254.169.254 dev virbr0
sudo python md_server/mdserver/server.py mdserver.conf &

So now that the metadata server is running, time to try running ecompute on all the nodes. I use iTerm2, which has some sweet tools for splitting the terminal screen and running the same command in the different panes. I recorded a script of what happened:

I ran the command ecompute & on all the nodes to start the compute service in the background.

ed@etcd1:~/projects/etcd-compute(master)$ ecompute &
[1] 4661
ed@etcd1:~/projects/etcd-compute(master)$ 1556230694.3633301: PID: 4661 [None] {'uuid': '19a89e30-4bdd-49e7-b1a0-d4172bf7b289', 'placement': {'endpoint': 'http://etcd1:8080'}, 'etcd': {'host': 'etcd1'}, 'resize': False, 'bridge': 'br0'}
1556230694.364856: PID: 4661 [19a89e30-4bdd-49e7-b1a0-d4172bf7b289] {'VCPU': 4, 'DISK_GB': 77, 'MEMORY_MB': 7976}
1556230694.5012665: PID: 4661 [19a89e30-4bdd-49e7-b1a0-d4172bf7b289] Existing resource provider with gen 7 found with usages: VCPU: 0, MEMORY_MB: 0, DISK_GB: 0.

It’s interesting to see that because I had run this a few times earlier, etcd-compute recognized the UUID of the node, and noted that there was already an entry for that resource provider, with a generation of 7. If I were to stop that ecompute service and then re-start it, I would see the same as above, except this time the generation would be 8. That’s because when the service is killed, it changes the ‘reserved’ amount of its VCPU inventory to the total amount, effectively preventing that node from being provisioned. That change increments the resource provider’s generation.
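
That “reserve everything on shutdown” trick is easy to picture against the Placement API. As a hedged illustration of the mechanism (this is not etcd-compute’s actual code, and the token is made up), flipping reserved up to total would look roughly like this:

import requests

PLACEMENT = "http://etcd1:8080"
HEADERS = {"OpenStack-API-Version": "placement 1.26",   # 1.26+ allows reserved == total
           "x-auth-token": "admin"}                     # illustrative auth

rp = "19a89e30-4bdd-49e7-b1a0-d4172bf7b289"   # this node's provider UUID from above

# Fetch all inventories; the response carries the provider generation to echo back.
data = requests.get(f"{PLACEMENT}/resource_providers/{rp}/inventories",
                    headers=HEADERS).json()
vcpu = data["inventories"]["VCPU"]
vcpu["reserved"] = vcpu["total"]
vcpu["resource_provider_generation"] = data["resource_provider_generation"]

# Writing it back makes the node unschedulable and bumps the generation (7 -> 8).
resp = requests.put(f"{PLACEMENT}/resource_providers/{rp}/inventories/VCPU",
                    json=vcpu, headers=HEADERS)
print(resp.status_code)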

At about the 30-second mark, I tried to create a VM by running the command eschedule 'resources=VCPU:1,DISK_GB:1,MEMORY_MB:256' on the etcd3 node. That worked, and almost immediately you can see that it was scheduled to the etcd1 node, and the build process starts. However, there was a lot of error output, with the main error being error: failed to get domain ‘ff77fe58-e96a-498b-a3f5-a59030987238’. This was repeated several times, along with a bunch of network errors, so at this point I stopped the experiment.

There’s a lot I learned by going through all this, and I see many places where the etcd-compute project could be improved, starting with the documentation. I’d also like to get some less ephemeral debugging output, so that when there are problems like I had spinning up a VM, they are recorded for later analysis. I’d also like to learn a lot more about the details of the networking required so that I can make sense of some of the networking errors.

The author of etcd-compute, Chris Dent, and I are hoping to have a mini-sprint on this project next week at the Open Infrastructure Summit in Denver, Colorado. If you will be there and want to join in the fun, drop me an email and I’ll let you know when we settle on a time and place.

Playing with etcd-compute

I’ve been interested in the etcd-compute project by Chris Dent. It’s sort of a lightweight virtual machine manager like OpenStack Nova, but without the complexity and cruft Nova has accumulated over the past 9 years. It takes advantage of technologies that simply didn’t exist in 2010 when Nova was created, using etcd’s built-in notifications instead of passing large, complex objects over a message bus to make Remote Procedure Calls (RPC).

Keep in mind that Nova does a lot of things that etcd-compute can’t, so this isn’t a potential 1:1 replacement for Nova. But it does have potential as a much lighter replacement for those applications where the full power of Nova isn’t needed.
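
To make the notification idea concrete, here is a tiny sketch of the pattern using the python-etcd3 client. The key layout and payload are my own invention for illustration, not etcd-compute’s actual schema.

import etcd3

client = etcd3.client(host='etcd1', port=2379)

# Elsewhere, a scheduler "sends" work simply by writing a key, e.g.:
#   client.put('/schedule/etcd1/<instance-uuid>',
#              '{"resources": {"VCPU": 1, "MEMORY_MB": 256, "DISK_GB": 1}}')

# A compute node doesn't consume RPC messages from a broker; it just watches
# its own prefix and reacts to the change notifications etcd pushes to it.
events, cancel = client.watch_prefix('/schedule/etcd1/')
for event in events:
    print('build request:', event.key, event.value)
    break          # handle a single request, just for this sketch
cancel()

That watch loop is the whole “message bus”: no broker, no queues, no serialized RPC objects.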

This post is designed to be obsolete within a week or so. What I’m aiming for is to record what worked for me following Chris’s instructions. Where I ran into problems, it points to one of three things: our systems started out differently, Chris assumed something that wasn’t in the README.md file, or my brain wasn’t firing on all cylinders. It is my hope that this may help improve the installation instructions, and guide others who may wish to explore etcd-compute.

I don’t have a lot of hardware—ok, any hardware—at my disposal to experiment with, so I started by creating 3 Ubuntu 18.04 VMs in the internal OpenStack cloud for my team here at IBM. Yes, you can run virtualization on top of virtualization, and it’s turtles all the way down. But it does work! I named the instances etcd1, etcd2, and etcd3, with etcd1 being the controller and the others used as standard compute nodes.

There are some requirements—docker.io, virtinst, libvirt-daemon, libvirt-clients, and libguestfs-tools—that need to be installed on all the nodes, so I updated the distro packages and installed the requirements. Unfortunately, libvirtd wouldn’t start, and well, that’s kind of an important piece. So I cleaned house and tried again:

ed@etcd1:~$ sudo aptitude purge libvirt-daemon
ed@etcd1:~$ sudo apt install -y qemu qemu-kvm libvirt-bin bridge-utils virt-manager
ed@etcd1:~$ sudo systemctl enable libvirtd.service
Synchronizing state of libvirtd.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable libvirtd
Created symlink /etc/systemd/system/libvirt-bin.service → /lib/systemd/system/libvirtd.service.
Created symlink /etc/systemd/system/sockets.target.wants/virtlockd.socket → /lib/systemd/system/virtlockd.socket.
Created symlink /etc/systemd/system/sockets.target.wants/virtlogd.socket → /lib/systemd/system/virtlogd.socket.
ed@etcd1:~$ sudo systemctl start libvirtd.service
ed@etcd1:~$ sudo systemctl status libvirtd.service
● libvirtd.service - Virtualization daemon
   Loaded: loaded (/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2019-04-23 23:29:30 UTC; 5s ago
     Docs: man:libvirtd(8)
           https://libvirt.org
 Main PID: 5289 (libvirtd)
    Tasks: 19 (limit: 32768)
   CGroup: /system.slice/libvirtd.service
           ├─4486 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_
           ├─4487 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_
           └─5289 /usr/sbin/libvirtd
Apr 23 23:29:30 egleafe-etcdcompute-1 systemd[1]: Starting Virtualization daemon…
Apr 23 23:29:30 egleafe-etcdcompute-1 systemd[1]: Started Virtualization daemon.
Apr 23 23:29:30 egleafe-etcdcompute-1 dnsmasq[4486]: read /etc/hosts - 10 addresses
Apr 23 23:29:30 egleafe-etcdcompute-1 dnsmasq[4486]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
Apr 23 23:29:30 egleafe-etcdcompute-1 dnsmasq-dhcp[4486]: read /var/lib/libvirt/dnsmasq/default.hostsfile
lines 1-17/17 (END)

So I guess it’s working now!

It’s also a pain to always have to use sudo to run docker commands, so add your user to the docker group. The command for this is sudo usermod -a -G docker ed, which adds user ‘ed’ to the group ‘docker’. You have to log out and log back in for it to take effect, but once you do, you can run commands like docker ps -a without sudo.

Also, in my experience I’ve run into various odd problems using the distro version of Python, so I prefer to install from source to get the latest Python (3.7.3 right now).

Being a creature of habit, I like having the project code I’m working with to be under a ~/projects directory. So for each of these instances, I ran the following:

ed@etcd1:~$ mkdir projects
ed@etcd1:~$ cd projects/
ed@etcd1:~/projects$ git clone https://github.com/cdent/etcd-compute.git
Cloning into 'etcd-compute'…
remote: Enumerating objects: 169, done.
remote: Counting objects: 100% (169/169), done.
remote: Compressing objects: 100% (107/107), done.
remote: Total 205 (delta 97), reused 124 (delta 61), pack-reused 36
Receiving objects: 100% (205/205), 54.58 KiB | 119.00 KiB/s, done.
Resolving deltas: 100% (107/107), done.
ed@etcd1:~/projects$ cd etcd-compute/
ed@etcd1:~/projects/etcd-compute(master)$

As the etcd-compute code has its own dependencies, those need to be installed by running sudo python setup.py develop. When I ran that the first time, I got an error when it was trying to install libvirt-python. I tried installing some other libvirt-related libraries and binaries, but I kept getting the same error. After a while I was trying anything I could think of, even running under Python 2! (didn’t work). Maybe it was something about Python 3.7 that was problematic, so I created a venv for Python 3.6, and ran pip install libvirt-python. It installed without a problem. Hmmm. So I fired up a Python 3.7 venv, and it also installed into that. It seems that the installation using setup.py was doing something different than a straight pip install. To test that, I got rid of the venvs, and ran sudo pip install libvirt-python, and it worked just fine. I was then able to install the rest of the dependencies by running sudo python setup.py develop.

Now that the dependencies are installed, we need to create the database for placement, and then modify the dockerenv file so that the OS_PLACEMENT_DATABASE__CONNECTION setting points to that. My database is on a MariaDB server, so I needed to change the value to:

mysql+pymysql://user:secret@my_host/placement

That means, of course, that I need to install pymysql using sudo pip install pymysql before I can make a connection. Once that’s done, I started the docker containers by running ./docker.sh from the primary VM. In my case, that’s etcd1.

That brings up another edit that’s needed on all your “machines”: changing the host referenced in compute.yaml and schedule.yaml. These files assume that the host is named ‘ds1’, which isn’t true in this case. I changed ‘ds1’ to ‘etcd1’, and then added an entry to each node’s /etc/hosts file with the IP address of the etcd1 VM.

We also want to create a value for the uuid in compute.yaml. One simple way is to run python -c "import uuid; print(uuid.uuid4())", and copy the output to paste into compute.yaml. Do that on every compute node you are running.

That’s enough for one day. Tomorrow we start with configuring networking!

Stein PTG Recap

The OpenStack PTG for the Stein cycle was held in Denver this past week from September 10-14. And yes, it was at the same hotel as last year for the Queens PTG, complete with loud commuter train whistles. There was one clear theme that was expressed in different sessions across different teams:

“Not Enough Cycles”

It seemed that everyone has been stretched pretty thin by the demands of the upstream OpenStack work as well as the internal demands of their employers. As the New Car Smell™ has worn off of OpenStack, employers aren’t as willing to have their employees spend as much time on OpenStack projects, and several projects that were either in the planning stages or the early development cycle have had to be pushed aside for lack of time to work on them.

The API-SIG had its sessions on Monday, and one of the main topics slated for discussion was a perfect example of this: the effort to provide common healthcheck middleware across OpenStack projects. This would provide the benefit of allowing deployments to monitor all their cloud processes, and be able to detect when one of them is not running so they can automatically re-launch it. It’s a great idea, but it has stalled in the last few months because the people who were working on it have been re-tasked by their employers onto non-OpenStack projects. Since this effort may be of interest to the members of the Self-healing SIG, we will approach them to see if they may have people who can work on it. If anyone else feels strongly about this effort and does have available time, please reply on that review to let the original authors know, as they would be happy to help new people get up to speed with this.
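
The core of the idea is small, which is part of what makes the stall frustrating. A minimal sketch of the kind of WSGI middleware being discussed (my own illustration, not the actual proposed oslo.middleware code):

class HealthcheckMiddleware:
    """Answer GET /healthcheck directly, without touching the wrapped app.

    A process that is up but wedged won't respond, so a monitor or load
    balancer can tell the difference between "running" and "actually serving".
    """

    def __init__(self, app, path="/healthcheck"):
        self.app = app
        self.path = path

    def __call__(self, environ, start_response):
        if environ.get("PATH_INFO") == self.path:
            body = b"OK"
            start_response("200 OK", [("Content-Type", "text/plain"),
                                      ("Content-Length", str(len(body)))])
            return [body]
        return self.app(environ, start_response)

Each project would just wrap its existing WSGI pipeline with something like this; the cross-project work is agreeing on the path, the response format, and how back-end checks report “unhealthy”, and that coordination is exactly what needs people with cycles.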

We also discussed the GraphQL experiment, but unfortunately no one who is involved in this attended the PTG, so there wasn’t a lot of discussion. Oh, except to note that those involved have said that the effort has been slow because (you guessed it!) they don’t have enough cycles to focus on this.

We discussed design approaches that raise fewer exceptions as a way to reduce complexity in code. For example, what should the behavior be when calling DELETE on a resource that doesn’t exist? The answer is that it depends on how you define what DELETE does. One possibility is that you locate a resource and then delete it; if the resource doesn’t exist, raise a 404 Not Found. The other is to define DELETE as “make sure that this resource doesn’t exist”. Under this approach, if the resource isn’t found, then Mission Accomplished! Not only does this make DELETE idempotent, it eliminates the need for everyone who calls the API to bracket each call in code like:

try:
    delete(my_resource)
except NotFound:
    pass

We agreed that in general, we should emphasize designs that minimize the complexity of code that calls an API. Most of the time when DELETE is called on a resource, the caller simply wants that resource gone. In the rare event that they need to ensure that the resource exists ahead of deleting it, they can do a HEAD or GET first. But in the vast majority of cases, there is no need to return a 404 if the resource doesn’t exist.
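
Sketched as server-side code, the difference between the two definitions is a single branch. This is a hypothetical handler over an in-memory store, just to show the shape of the two behaviors:

RESOURCES = {}   # hypothetical store mapping resource id -> resource

def delete_strict(resource_id):
    """DELETE as 'locate it, then remove it': a missing resource is a 404."""
    if resource_id not in RESOURCES:
        return 404
    del RESOURCES[resource_id]
    return 204

def delete_idempotent(resource_id):
    """DELETE as 'ensure it does not exist': a missing resource is already success."""
    RESOURCES.pop(resource_id, None)
    return 204

With the second form, calling DELETE twice is harmless, and the try/except dance above disappears from every client.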

The last thing we addressed was the state of Monty Taylor’s patches for consuming version discovery. Once again, these have languished because Monty has been doing like a zillion other things. We agreed that, while not complete, there is a large amount of useful information there, so we will merge them so that they are available, and add some wording to indicate that they are still a work in progress. As they say, perfect is the enemy of the good.

There was one other event on Monday, and that was an impromptu meeting of the principal people involved in the process of extracting the Placement service into its own project. When Placement was created it was supposed to be separate from Nova, but people argued that for $REASONS it would be easier to start as part of Nova, and then later on be separated into its own project. Every cycle since then, the separation has been put off, because there were too many other things to get done, and because the effort required to separate Placement kept increasing as Placement grew. Six months ago at the PTG in Dublin, we agreed that we would finally do this as part of the Stein release. During the Rocky time frame, a lot of work was done by Chris Dent, and to a lesser degree myself, to determine just what the extraction process would require. So as soon as Rocky was released, we started the process of extracting the Placement code from Nova, and began talking about the project split. That’s when we ran into a wall: the current leaders of the Nova team accepted the code split, but were adamant that now was not the time for a governance split. This was confusing, as we had already agreed that the core team for the new Placement project would start off as the current Nova core team, so any code development would not be affected, but it seemed as though there was a fundamental, unexpressed mistrust getting in the way.

So we had this meeting that was mediated by Mohammed Naser to figure out just what needed to be done before the Nova team would agree to allow the creation of the Placement project. We agreed (some of us reluctantly) on a set of technical milestones that needed to be achieved before Placement would be separated into its own project. The reluctance was the result of two things: the unlikelihood that some of the milestones would be completed any time soon, but also because the underlying cause of the mistrust was never acknowledged or discussed. So I’m happy that there is finally a path forward, but disappointed that the discussions couldn’t be more honest and forthcoming.

Tuesday was a cross-project day, with discussions between Nova/Placement and the Blazar, Cinder, and Ops teams. The Blazar discussions were interesting, as they are basically “consuming” resources by reserving them, and then parceling out those resources to individual reservations. It is too bad that conversations like this did not take place during the Placement design discussions of the past few years, as it would have been nice to consider this use case. As it is now, there really isn’t a clean way to handle that in Placement.

Wednesday was the start of the three days of Nova discussions. If you want to see the details of what topics were discussed, and various input people had, you can read the etherpad tracking the schedule. We started off with the standard retrospective discussion, which covered many of the same things we normally cover, and produced the typical “let’s do better” resolutions. There was no “how can we be a better team” sort of discussion, because frankly we’ve tried to have them before, and they quickly turn into defensive posturing instead of positive discussion, so no one was interested in going through that again.

The Placement discussions were next, and covered many topics, but we still only got part-way through the list. Much of the early discussion covered the state of extraction and what else needs to happen to have a fully independent repo. We also covered the desire by some on the Nova team to put more Nova-centric information into Placement, so that Nova could do things like quota counting and the like. Personally, I would strongly prefer that Nova-specific information be stored in Nova, but for now it seems like that distinction isn’t very important to others. I didn’t argue these points very much in person, as these in-person discussions tend to devolve quickly since everyone has a slightly different understanding of what is being proposed, and we tend to talk past each other. I hope to persuade more once actual specs with concrete proposals are available for review.

Wednesday afternoon was mostly discussions of Cells v2. Frankly, I didn’t pay close attention to most of it, as I have little interest in this topic. It always seemed odd to design a distributed system like cells and not use a distributed database. So instead I started writing this blog post, and reviewed some Placement patches in gerrit. Fortunately, the cells discussions ended early, and there was time to have more Placement discussions. One thing that involved more disagreement than I expected was how to handle a potential new library for standard resource classes. There is already the os-traits library for enumerating standard traits, so creating an os-resource-classes lib seemed like it would be uncontroversial. However, there was an objection to now having two separate things when both were pretty lightweight. OK, then let’s combine them into a new os-placement library, right? No, not so simple. There was concern that packagers would have to edit their packaging scripts, so it was proposed that the resource classes be added to the os-traits library. In other words, to work with traits, you’d use os-traits. To work with resource classes, you’d use os-traits. Wait, what?? This, in my opinion, is a great example of short-term thinking: making life a little easier for a few people now, in return for confusing the hell out of everyone who will have to use it for years in the future by having a misleading name.

Thursday morning was the Nova – Cinder discussions. Once again, this isn’t an area I’m active in, so I listened with one ear while reviewing Placement code. The discussions surrounding the transfer of ownership of an in-use volume, though, caught my attention. It is something that cloud operators seem to really want, but there are a bunch of technical hurdles, as Cinder doesn’t allow transfer of either in-use or encrypted volumes. Operators are doing it using a variety of hacks, so it was agreed that we need to provide them a way to get this done.

There were some good Nova – Cyborg discussions, both on Monday morning and again on Thursday before lunch. These concerned themselves with issues such as which service “owns” the accelerator devices, and how to configure that. I won’t go into details here, but you can read the etherpad if you want more information.

Thursday afternoon had two more joint sessions: Nova – Neutron, and Nova – Ironic. The etherpad (starting around line 563) contains the topics and the resolutions from those meetings; again, as I’m not working on those areas, I only half-paid attention. Friday was set aside for a variety of miscellaneous topics; too many to list here. It seemed like, as in past PTGs, people were burnt out after days of intense discussions. The Nova room was half-empty, and the common areas seemed relatively empty. I suppose many people left for home by then.

This was the last “pure” PTG. Starting next spring, the PTG will take place alongside the OpenStack Summit; the exact days haven’t been announced, but the general assumption is that there will be 3 days for the summit, and 3 or 4 days for the PTG, and these days may or may not overlap. The thinking is that it will reduce the number of times that people have to fly, since many attend both events. I’ll have to say that, while I understand the financial realities, this will be a step backwards. Having the PTG at the start of the cycle helps with focus for a project, and not having the distractions of the Summit is a big plus. But the reality is that companies aren’t approving travel for events that don’t involve customer interaction, and many saw the PTG as not important for that reason. That kind of short-sightedness is disappointing, as OpenStack as a whole will suffer as a result.

The Denver area is surrounded by some outstanding natural beauty. After the PTG was done, we took several days to explore and enjoy several of these treasures, such as the Rocky Mountain Arsenal National Wildlife Refuge, Rocky Mountain National Park, and the Garden of the Gods in Colorado Springs. If you ever visit the area, be sure not to miss them!

mountain selfie
Linda and I enjoying the beauty of Rocky Mountain National Park.