This coming Monday I’m having total knee replacement surgery. Neither of my knees is very healthy, but my left knee has been particularly painful. It’s been over a decade since there has been any cartilage between the bones of the knee, and all that wear and tear has taken its toll.
I tried to get my actual x-ray to illustrate this post, but that proved to be difficult, so here’s one I got off of Google Images:
Normally the bones are separated by cartilage, which allows them to move without much friction. My left knee’s x-ray looks almost exactly like the image on the right. I have had to have all the cartilage in that joint removed over the years, and now it’s “bone on bone”.
I’ve written about my physical ailments before, and for the past two years I’ve been despondent over this decline. Then this past November came the email from the state soccer referee representative informing us of the upcoming registration and re-certification for the 2019 season. I thought about my complete lack of involvement in soccer over the past two years: I reffed a few games in early 2017, and didn’t ref a single game in 2018. I had decided that I should face the truth and retire. I told my wife and family, and felt at peace at finally accepting that this was something I simply could no longer do.
But a couple of weeks later it started gnawing at me. I didn’t want to give this up without a fight. I told my wife that I was thinking about getting a knee replacement, with the goal of being able to ref a few games by the end of 2019. She was 100% behind me, so after the holidays I started looking around for a surgeon. After many hours researching knee surgeons in San Antonio, I found Dr. David Fox, and set up an appointment. We discussed what would be involved, and he didn’t sugar-coat anything. He told me to “expect 6 weeks of hell” after the surgery, as the recovery process involves doing a lot of physical therapy exercises that can be painful. Normally, people undergoing this surgery have to take 3–4 weeks (and sometimes longer) off of work, but as I work from my home, I can be back at work as soon as I’m off my pain meds and mentally clear.
I’ll be sure to follow the course of the surgery and recovery process in future posts. Now I’m ready for my 6 weeks of hell!
This is a quick demonstration of how to create a virtual environment in Python 3. I’m starting with an empty directory, ~/projects/demo. I then run the command to create a virtual environment:
ed@imac:~/projects/demo$ ll
ed@imac:~/projects/demo$ python3 -m venv my_env
ed@imac:~/projects/demo$ ll
total 0
drwxr-xr-x  6 ed  staff   192B Jan 24 18:14 my_env
Note that the command created a directory with the name I gave it: ‘my_env’. Next we have to activate it. Activating adjusts the shell’s environment so that running ‘python’ uses the environment’s own interpreter, and so that installed modules are placed inside the environment rather than system-wide.
I have a bash script that changes the prompt to show the current Python environment; notice that after activating, the prompt now starts with ‘(my_env)’.
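The activation command itself doesn’t appear in the transcripts above; assuming bash (or zsh) on macOS or Linux, it’s a matter of sourcing the activate script that venv placed inside the environment:

```shell
# Activate the environment created above (bash/zsh on macOS or Linux)
source my_env/bin/activate

# 'python' and 'pip' now resolve to the copies inside my_env
which python
```

On Windows the equivalent is running my_env\Scripts\activate instead.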
Installed modules are located in the ‘site-packages’ subdirectory that’s a few levels deep. Let’s see what’s in this fresh virtual env’s site-packages:
(my_env)ed@imac:~/projects/demo$ ll my_env/lib/python3.6/site-packages/
total 8
drwxr-xr-x  3 ed  staff    96B Jan 24 18:14 __pycache__
-rw-r--r--  1 ed  staff   126B Jan 24 18:14 easy_install.py
drwxr-xr-x 23 ed  staff   736B Jan 24 18:14 pip
drwxr-xr-x 10 ed  staff   320B Jan 24 18:14 pip-9.0.1.dist-info
drwxr-xr-x  6 ed  staff   192B Jan 24 18:14 pkg_resources
drwxr-xr-x 34 ed  staff   1.1K Jan 24 18:14 setuptools
drwxr-xr-x 12 ed  staff   384B Jan 24 18:14 setuptools-28.8.0.dist-info
One of my favorite development tools for Python is the pudb debugger. To show that we can install a package, let’s try importing it first (and failing):
(my_env)ed@family-imac:~/projects/demo$ python
Python 3.6.4 (v3.6.4:d48ecebad5, Dec 18 2017, 21:07:28)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import pudb
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'pudb'
>>> ^D
(my_env)ed@family-imac:~/projects/demo$
Let’s look at the site-packages directory after installing pudb:
(my_env)ed@family-imac:~/projects/demo$ ll my_env/lib/python3.6/site-packages/
total 8
drwxr-xr-x 10 ed  staff   320B Jan 24 18:38 Pygments-2.3.1.dist-info
drwxr-xr-x  3 ed  staff    96B Jan 24 18:14 __pycache__
-rw-r--r--  1 ed  staff   126B Jan 24 18:14 easy_install.py
drwxr-xr-x  7 ed  staff   224B Jan 24 18:35 pip
drwxr-xr-x  9 ed  staff   288B Jan 24 18:35 pip-19.0.1.dist-info
drwxr-xr-x  6 ed  staff   192B Jan 24 18:14 pkg_resources
drwxr-xr-x 18 ed  staff   576B Jan 24 18:38 pudb
drwxr-xr-x  9 ed  staff   288B Jan 24 18:38 pudb-2018.1.dist-info
drwxr-xr-x 22 ed  staff   704B Jan 24 18:38 pygments
drwxr-xr-x 34 ed  staff   1.1K Jan 24 18:14 setuptools
drwxr-xr-x 12 ed  staff   384B Jan 24 18:14 setuptools-28.8.0.dist-info
drwxr-xr-x 33 ed  staff   1.0K Jan 24 18:38 urwid
drwxr-xr-x  7 ed  staff   224B Jan 24 18:38 urwid-2.0.1.dist-info
Note that there are now entries for both pudb and its dependency, pygments. And to verify that it has been successfully installed:
(my_env)ed@family-imac:~/projects/demo$ python
Python 3.6.4 (v3.6.4:d48ecebad5, Dec 18 2017, 21:07:28)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import pudb
>>> pudb.__version__
'2018.1'
>>> ^D
(my_env)ed@family-imac:~/projects/demo$
There’s a ton more to using virtual environments, but that should give you a start.
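One last handy command, in case you’re wondering how to get back out: when you’re done working in an environment, a single command restores the shell to its original state:

```shell
# Leave the virtual environment; the prompt and PATH revert
# to what they were before activation
deactivate
```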
A few weeks ago I was fortunate enough to attend a show by David Byrne. I have seen him perform many times before, and he has always put on a great show. He didn’t let us down this time either. After the last number, they bowed and walked off-stage to a standing ovation. The applause continued for a few minutes, and was then rewarded with an encore.
Did I worry that my applause wouldn’t be the decisive amount of noise to ensure that there was an encore? Of course not. When you are a member of the audience of a performance, it’s not about you, it’s about everyone. Of course if I didn’t clap they would have done the encore anyway, and if I had clapped twice as hard it wouldn’t have changed anything. The problem is expecting individual effects in a group context.
Voting is the same thing. It isn’t about you, the voter. It’s about everyone, the voters. When an election is held, vote your preference. Odds are, sure, it won’t make a difference. Your vote won’t be the single event that changes history. But it’s not supposed to be. Assuming that unless it is, it doesn’t matter is completely missing the point. Just as applauding (or not) for an encore, it is the response of the group that matters, not any single member of the group.
If you have the privilege to vote, and haven’t done so yet, make the effort to do so tomorrow. The collective action of those who might feel that they are powerless wields a hell of a lot of power.
With the mid-term elections coming up in a few weeks here in the US, many of us are hoping for the “blue wave” that will help to counteract the extreme direction that the Trump regime has pulled this country. But having observed how things have been operating for the past two years, I can’t help but feel a sense of dread about what will happen.
One thing that has plagued politicians is being involved in a scandal. But in the Trump era, there are new scandals every day, sometimes so many at once that it’s hard to keep up. And I think that’s the plan: overwhelm people so that no single scandal gets any attention.
So this dread I feel is that on Election Day, there won’t be a few irregularities about the vote; there will be thousands and thousands. There will be so many that it won’t be possible to investigate them all. We will have no certainty about the results. There will be cries of fraud, but instead of taking them seriously, they will be met with the standard “you lost; get over it”. And as a result, the political arena will become more extreme than ever before.
I have never hoped that I am wrong as strongly as I do now.
The OpenStack PTG for the Stein cycle was held in Denver this past week from September 10–14. And yes, it was at the same hotel as last year for the Queens PTG, complete with loud commuter train whistles. There was one clear theme that was expressed in different sessions across different teams:
“Not Enough Cycles”
It seemed that everyone has been stretched pretty thin by the demands of the upstream OpenStack work as well as the internal demands of their employers. As the New Car Smell™ has worn off of OpenStack, employers aren’t as willing to have their employees spend as much time on OpenStack projects, and several projects that were either in the planning stages or the early development cycle have had to be pushed aside for lack of time to work on them.
The API-SIG had its sessions on Monday, and one of the main topics slated for discussion was a perfect example of this: the effort to provide common healthcheck middleware across OpenStack projects. This would provide the benefit of allowing deployments to monitor all their cloud processes, and be able to detect when one of them is not running so they can automatically re-launch it. It’s a great idea, but it has stalled in the last few months due to the people who were working on it being re-tasked at their jobs on non-OpenStack projects. Since this effort may be of interest to the members of the Self-healing SIG, we will approach them to see if they may have people who can work on it. If anyone else feels strongly about this effort and does have available time, please reply on that review to let the original authors know, as they would be happy to help new people get up to speed with this.
We also discussed the GraphQL experiment, but unfortunately no one who is involved in this attended the PTG, so there wasn’t a lot of discussion. Oh, except to note that those involved have said that the effort has been slow because (you guessed it!) they don’t have enough cycles to focus on this.
We discussed design approaches that reduce the number of exceptions raised as a way to reduce complexity in code. For example, what should the behavior be when calling DELETE on a resource that doesn’t exist? The answer is that it depends on how you define what DELETE does. One possibility is that you locate a resource and then delete it; if the resource doesn’t exist, return a 404 Not Found. The other is to define DELETE as “make sure that this resource doesn’t exist”. Under this approach, if the resource isn’t found, then Mission Accomplished! Not only does this make DELETE idempotent, it also spares everyone who calls the API from having to bracket each call in code like:
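The snippet itself didn’t make it into the post; the defensive bracketing in question looks something like this (NotFound and FakeClient are illustrative stand-ins, not any real OpenStack client):

```python
class NotFound(Exception):
    """Raised by a 404-style DELETE when the resource doesn't exist."""

class FakeClient:
    """Stand-in for an API client whose DELETE raises on a missing resource."""
    def __init__(self):
        self.resources = {"vol-1"}

    def delete(self, resource_id):
        if resource_id not in self.resources:
            raise NotFound(resource_id)
        self.resources.remove(resource_id)

client = FakeClient()

# With 404-raising semantics, every caller has to bracket the call:
try:
    client.delete("vol-2")        # resource never existed
except NotFound:
    pass  # already gone, which is all the caller wanted anyway
```

Under the “make sure it doesn’t exist” definition, the try/except disappears: delete() simply returns success on both paths.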
We agreed that in general, we should emphasize designs that minimize the complexity of code that calls an API. Most of the time when DELETE is called on a resource, the caller simply wants that resource gone. In the rare event that they need to ensure that the resource exists ahead of deleting it, they can do a HEAD or GET first. But in the vast majority of cases, there is no need to return a 404 if the resource doesn’t exist.
The last thing we addressed was the state of Monty Taylor’s patches for consuming version discovery. Once again, these have languished because Monty has been doing like a zillion other things. We agreed that, while not complete, they contain a large amount of useful information, so we will merge them so that they are available, and add some wording to indicate that they are still a work in progress. As they say, the perfect is the enemy of the good.
There was one other event on Monday, and that was an impromptu meeting of the principal people involved in extracting the Placement service into its own project. When Placement was created it was supposed to be separate from Nova, but people argued that for $REASONS it would be easier to start as part of Nova, and then later on be separated into its own project. Every cycle since then, the separation has been put off, because there were too many other things to get done, and because the effort required to separate Placement kept increasing as Placement grew. Six months ago at the PTG in Dublin, we agreed that we would finally do this as part of the Stein release. During the Rocky time frame, a lot of work was done by Chris Dent, and to a lesser degree myself, to determine just what the extraction process would require. So as soon as Rocky was released, we started extracting the Placement code from Nova, and began talking about the project split. That’s when we ran into a wall: the current leaders of the Nova team accepted the code split, but were adamant that now was not the time for a governance split. This was confusing, as we had already agreed that the core team for the new Placement project would start off as the current Nova core team, so code development would not be affected; it seemed as though an unexpressed, fundamental mistrust was in the way.
So we had this meeting, mediated by Mohammed Naser, to figure out just what needed to be done before the Nova team would agree to the creation of the Placement project. We agreed (some of us reluctantly) on a set of technical milestones that must be achieved before Placement is separated into its own project. The reluctance was the result of two things: the unlikelihood that some of the milestones would be completed any time soon, and the fact that the underlying cause of the mistrust was never acknowledged or discussed. So I’m happy that there is finally a path forward, but disappointed that the discussions couldn’t be more honest and forthcoming.
Tuesday was a cross-project day, with discussions between Nova/Placement and the Blazar, Cinder, and Ops teams. The Blazar discussions were interesting, as they are basically “consuming” resources by reserving them, and then parceling out those resources to individual reservations. It is too bad that discussions like this did not happen when the Placement design discussions happened over the past few years, as it would have been nice to consider this use case. As it is now, there really isn’t a clean way to handle that in Placement.
Wednesday was the start of the three days of Nova discussions. If you want to see the details of what topics were discussed, and various input people had, you can read the etherpad tracking the schedule. We started off with the standard retrospective discussion, which covered many of the same things we normally cover, and produced the typical “let’s do better” resolutions. There was no “how can we be a better team” sort of discussions, because frankly we’ve tried to have them before, and they quickly turn into defensive posturing instead of positive discussion, so no one was interested in going through that again.
The Placement discussions were next, and covered many topics, but we still only got part-way through the list. Much of the early discussion covered the state of extraction and what else needs to happen to have a fully independent repo. We also covered the desire by some on the Nova team to put more Nova-centric information into Placement, so that Nova could do things like quota counting and the like. Personally, I would strongly prefer that Nova-specific information be stored in Nova, but for now it seems like that distinction isn’t very important to others. I didn’t argue these points very much in person, as these in-person discussions tend to devolve quickly since everyone has a slightly different understanding of what is being proposed, and we tend to talk past each other. I hope to persuade more once actual specs with concrete proposals are available for review.
Wednesday afternoon was mostly discussions of Cells v2. Frankly, I didn’t pay close attention to most of it, as I have little interest in this topic. It always seemed odd to design a distributed system like cells and not use a distributed database. So instead I started writing this blog post, and reviewed some Placement patches in gerrit. Fortunately, the cells discussions ended early, and there was time to have more Placement discussions. One thing that involved more disagreement than I expected was how to handle a potential new library to handle standard resource classes. There is already the os-traits library for enumerating standard traits, so creating an os-resource-classes lib seemed like it would be uncontroversial. However, there was an objection to now having two separate things when both were pretty lightweight. OK, then let’s combine them into a new os-placement library, right? No, not so simple. There was concern that packagers would have to edit their packaging scripts, so it was proposed that the resource classes be added to the os-traits library. In other words, to work with traits, you’d use os-traits. To work with resource classes, you’d use os-traits. Wait, what?? This, in my opinion, is a great example of short-term thinking: making life a little easier for a few people now, in return for confusing the hell out of everyone who will have to use it for years in the future by having a misleading name.
Thursday morning was the Nova – Cinder discussions. Once again, this isn’t an area I’m active in, so I listened with one ear while reviewing Placement code. The discussions surrounding the transfer of ownership of an in-use volume, though, caught my attention. It is something that cloud operators seem to really want, but there are a bunch of technical hurdles, as Cinder doesn’t allow transfer of either in-use or encrypted volumes. Operators are doing it using a variety of hacks, so it was agreed that we need to provide them a way to get this done.
There were some good Nova – Cyborg discussions, both on Monday morning and again on Thursday before lunch. These concerned themselves with issues such as which service “owns” the accelerator devices, and how to configure that. I won’t go into details here, but you can read the etherpad if you want more information.
Thursday afternoon had two more joint sessions: Nova – Neutron, and Nova – Ironic. The etherpad (starting around line 563) contains the topics and the resolutions from those meetings; again, as I’m not working on those areas, I only half-paid attention. Friday was set aside for a variety of miscellaneous topics; too many to list here. It seemed like, as in past PTGs, people were burnt out after days of intense discussions. The Nova room was half-empty, and the common areas seemed relatively empty. I suppose many people left for home by then.
This was the last “pure” PTG. Starting next spring, the PTG will take place alongside the OpenStack Summit; the exact days haven’t been announced, but the general assumption is that there will be 3 days for the summit, and 3 or 4 days for the PTG, and these days may or may not overlap. The thinking is that it will reduce the number of times that people have to fly, since many attend both events. I’ll have to say that, while I understand the financial realities, this will be a step backwards. Having the PTG at the start of the cycle helps with focus for a project, and not having the distractions of the Summit is a big plus. But the reality is that companies aren’t approving travel for events that don’t involve customer interaction, and many saw the PTG as not important for that reason. That kind of short-sightedness is disappointing, as OpenStack as a whole will suffer as a result.