Day 47: Python Virtualenvs

If you’re not a Python programmer, you probably won’t find much in this post. Sorry.

If you are a Python programmer, then you probably know about virtualenvs, or virtual environments. They allow you to create several different Python environments to work in: each can have its own version of Python, as well as its own installed packages. This means I can work on a project that has particular requirements, and then switch to a project with completely different requirements, and the two won’t affect each other.

I used to use virtualenvwrapper, which made working with virtualenvs a lot easier. But as I switched to Python 3 over the past few years, I started to have some issues where it didn’t work correctly (sorry, I no longer recall precisely what those issues were). I’m sure a lot of that had to do with the addition of the venv module in Python 3, which lets you create a virtualenv by running python -m venv /path/to/virtualenv.
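
For reference, the manual workflow looks something like this (the path and virtualenv name here are just examples):

# Create the virtualenv
python3 -m venv ~/venvs/myproject

# Activate it in the current shell
source ~/venvs/myproject/bin/activate

# ...pip install things, do your work...

# Deactivate when finished
deactivate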

For a while I ran all the commands manually, but I have the quality that makes for good programmers: I’m lazy. A lazy person doesn’t want to do any more work than they have to, so if I can automate something to save time over the long run, I will. And here’s what I came up with:

export VENV_HOME="$HOME/venvs"

function workon {
    if [ -z "$1" ]
    then
        ls -1 "$VENV_HOME"
    else
        source "$VENV_HOME/$1/bin/activate"
    fi
}

function mkvenv {
    python3 -m venv "$VENV_HOME/$1"
    workon "$1"
    pip install -U pip setuptools wheel
    pip install ipython pytest-pudb requests
}

function rmvenv {
    command -v deactivate > /dev/null && deactivate
    rm -rf "$VENV_HOME/${1:?no virtualenv specified}"
}

_venvdirs()
{
    local cur="$2"
    COMPREPLY=( $(cd "$VENV_HOME" && compgen -d -- "${cur}") );
}
complete -F _venvdirs workon rmvenv

These lines should be added to your .bashrc on Linux, or your .bash_profile on a Mac. I haven’t tried them with zsh yet, so no guarantees there. Let’s go over what these lines do.

Line #1 defines the directory where the virtualenvs will be stored. You can store them anywhere; it doesn’t make any difference.

Lines #3–10 define the workon function, which activates the specified virtualenv, or lists all virtualenvs if none is specified. Lines #12–17 define the mkvenv function, which creates a new venv, and lines #19–22 define rmvenv, which deletes a virtualenv when you no longer need it.

I’d like to point out two lines in mkvenv that you can customize. Line #15 updates the installed versions of pip, setuptools, and wheel. If for some reason you don’t want the latest versions of these, edit or remove that line.

Line #16 is more interesting: nearly every virtualenv I create needs these packages installed. Rather than install them one-by-one, I add them when I create the virtualenv. If you have different packages you always want available, edit this line.
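
If you find yourself changing that default list often, one possible variation (my own tweak, not part of the original functions) is to pull the list out into a variable, so it can be overridden without editing the function:

export VENV_DEFAULT_PKGS="ipython pytest-pudb requests"

function mkvenv {
    python3 -m venv "$VENV_HOME/$1"
    workon "$1"
    pip install -U pip setuptools wheel
    # Deliberately unquoted so the list splits into separate package names
    [ -n "$VENV_DEFAULT_PKGS" ] && pip install $VENV_DEFAULT_PKGS
}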

Finally, lines #24–29 are of utmost importance to someone lazy like myself: they provide auto-completion for the other commands. I had to learn about bash completion to get that working, but it turned out to be much easier than I had imagined.
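
Put together, a session might look something like this (the virtualenv names are made up, and pip’s output is omitted):

$ mkvenv scratch        # creates ~/venvs/scratch, activates it, installs the basics
(scratch) $ workon      # no argument: list all existing virtualenvs
scratch
webapp
(scratch) $ workon webapp
(webapp) $ deactivate
$ rmvenv scratch        # deactivates if active, then deletes the virtualenv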

Here is a gif showing it in action:

Try it out! Let me know if you find this useful, or if you have suggestions for improvement.

Day 44: Employed Once More!

After 3 1/2 months of unemployment, during which I submitted countless job applications, became a regular on LinkedIn, learned the routines of the Texas Unemployment Benefits system, and sat through numerous interviews, I’m excited to report that I have a new job!

In a couple of weeks I will be starting at Nvidia as a Senior Python Developer, working on the tools for their GPU cloud. I’ve met the other people on my team during the video interview process, and they all seem like a bright bunch, so I can’t wait to start working with them!

It’s been difficult these last few months. It started with the pandemic and subsequent lockdown, which has affected everyone. Then came the layoff, with DataRobot letting 25% of its workforce go, including yours truly. It really wasn’t much consolation that I was only one of the 40 million or so people in the US who lost their jobs in those few weeks – it still hurt.

Still, I have had it better than most. My wife still had her job, which was super-important financially. We also had some savings, so we weren’t living paycheck-to-paycheck like so many Americans have to. And it did give me some free time to work on my photoviewer software, and practice my newly-discovered sport of disc golf. It also gave me the chance to perfect my sourdough bread technique (yeah, I know – how cliché!). But there is only so much to do when largely confined to the house.

Which is why I started this daily writing exercise. Not just to fill the time, but to get down some of the thoughts that have been in my head for a while, and polish my rusty writing skills. And while it’s been difficult to always find something to write about, I have noticed that writing itself is feeling more fluid.

I will continue this daily project until I start the job on July 20. After that, I will continue to write, just not on a daily basis. Going through this exercise has helped me enjoy writing more, and improved my ability to let a piece out into the wild without first obsessing over endless editing. That is probably the best thing I’ve gotten out of it.

Why Go to a Tech Conference?

Good question! It does seem unnecessary, especially since most major conferences record every talk and make them freely available online. PyCon has been doing this for many years, and is so good at it that the talks are available online shortly after they finish! So there’s no real penalty for waiting until you can watch them online.

I suppose that if you look at tech conferences as simply dry tutorials on some new tool or technique, the answer would be “no, you should save your money and watch the sessions at home”. But there are much bigger benefits to attending a conference than just the knowledge available at the talks. I like to think of it as pressing the restart button on my thinking as a developer. By taking advantage of these additional avenues of learning, I come away with a different perspective on things: new tools, new ways of using existing tools, different approaches to solving development issues, and so much more that is intangible. Limiting yourself to the tangible resources of a conference means that you’re missing out. So what are these intangible things?

One of the most important is meeting people. Not so much to build your social network, but more to expand your understanding of different approaches to development. The people there may be strangers, but you know that you have at least one thing in common with them, so it’s easy to start conversations. I’ve been to 14 PyCons, and at lunch I make it a point to sit at tables where I don’t know anyone, and ask the people there “So what do you use Python for?”. Invariably they use it in ways that I had never thought about, or to solve problems that I had never worked on. The conversation can then move on to “Where are you from?”; people usually love to brag about their home town, and you might learn a few interesting things about a place you’ve never been to. Many people also go out to dinner in groups, usually with people who know each other, but I always try to look for people who are alone, and invite them to join our group.

Another major benefit of attending in person is what is known as the “hallway track”. These are the unscheduled discussions that occur in the hallways between sessions; sometimes they are a continuation of discussions that were held in a previous talk, and other times they are simply a bunch of people exchanging ideas. Some of the best technical takeaways I’ve gotten from conferences have come from these hallway discussions. Having been to as many PyCons as I have, I always run into people I haven’t seen since the last PyCon, and we can catch up on what’s new in each other’s lives and careers. Like the lunchtime table discussions, these are opportunities to learn about techniques and approaches that are different from what you regularly do.

Closely related to the above is the “bar track”. Most conferences have a main hotel for attendees, and in the evening you can find lots of people hanging out in the bar. The discussions there tend to have a bit less technical content, for obvious reasons, but I’ve been part of some very technical discussions where the participants are all on their third beer or so. But even if you don’t drink alcohol, you can certainly enjoy hanging out with your fellow developers in the evening. Or, of course, you can use that time to recharge your mental batteries.

Yet another opportunity at a conference is to enhance your career. There is usually some form of formal recruiting; if you’re looking for a change of career, this can be a valuable place to start. I’ve heard some managers say that they won’t send their developers to conferences because they are afraid that someone will hire them away; it makes you wonder why they think their developers are not happy with their current jobs! But even if you’re not looking to make a career move at the moment, establishing relationships with others in your field can come in handy in the future if your job suddenly disappears. You can also learn which companies are looking for skills that match yours; I was surprised to learn that companies as diverse as Disney, Capital One, Yelp, and Bloomberg are all looking for Python talent. As an example, back at PyCon 2016 I met with some people recruiting for DataRobot, and while I didn’t pursue things then, they made a good impression on me. When I was looking for a change last year and got a LinkedIn message from a recruiter at DataRobot, I remembered them well, and this time I followed up, with the result that I’m now happily employed by DataRobot!

Unfortunately, I’ve seen people who arrive at a conference with a group of co-workers, attend the sessions, eat with each other at lunch, and then go out to dinner together. By isolating themselves and confining their learning to the scheduled talks, they are missing out on the most valuable part of attending a conference: interacting with the community, and sharing knowledge with their peers. If this sounds like you, I would advise you to try out some of the things I’ve mentioned here. I’m sure you will find that your conference experience is greatly improved!

Open Infrastructure Summit, Denver 2019

The first ever Open Infrastructure Summit was held in the last week of April 2019 at the Colorado Convention Center in Denver, CO. The re-branding from OpenStack to Open Infrastructure began last year, and this was the first summit to be officially held under the new name. Otherwise, it felt just like the OpenStack summits of old.

The keynotes were better than in prior summits – I think the sponsors got the feedback that no one was interested in sitting through a recap of “how they did X with OpenStack”, and instead focused more on what they intended to do with it. There was a great demo by Chris Hoge and Julia Kreger that showed a Kubernetes operator managing bare-metal infrastructure; it showed very clearly that the typical media message of “Kubernetes is replacing OpenStack” is silly. They exist in different problem spaces, and work well together. The only place Kubernetes is replacing OpenStack is in the hype cycle.

After the keynotes I went to the Nova Project Update session. It was very thorough, but felt more like someone reading release notes out loud. I had hoped for more of a discussion about the thinking that went into some of the things that were worked on or are being planned rather than just a straight recitation.

After that was lunch – sort of. For the first time since these summits began, lunch was not provided. Instead, you were supposed to go to one of the many restaurants in the area and buy your own lunch. However, since we had pretty poor weather (freezing temperatures, snow, and rain), walking around downtown Denver wasn’t what I felt like doing. Judging by how packed the restaurant in the hotel across the street was, a lot of other people felt the same way. I understand that times are not as heady as in previous years when OpenStack was the latest hotness, but this seemed like a poor place to cut back. I always enjoyed sharing a table with a bunch of other OpenStackers and learning about where they were from and what they were doing with OpenStack. Going out to lunch meant that people tended to stay with groups they already knew. The afternoon snacks were also gone, which is no big deal for me, but others mentioned to me that they missed having them. Finally, there was no signature piece of conference swag. I’m typing this wearing the OpenStack hoodie I got at the Paris 2014 summit, and have my sweatshirt from Tokyo 2015 in my room. Well, OK, they did give out a pair of socks, but they weren’t tied to the event. It’s not a huge thing, but not having something this time really makes things feel… different. And not in a good way.

There weren’t any sessions in the afternoon that I really wanted to go to, so instead I worked on two OpenStack-related projects: etcd-compute, and using graph databases such as Neo4j to hold information for the Placement service. I have previously written about my work with both of these. And since the author of etcd-compute, Chris Dent, was also here at the summit, it was a perfect time to work on it together, so I set up several VMs for us to “play with”.

Monday evening after the sessions was the “Marketplace Mixer”, which is a way to get the attendees to visit the vendor area. They provided food and beverages, and I had my badge scanned several times in exchange for some local craft beer. There wasn’t a lot offered by the vendors that would be useful to me, but I did run into a lot of people I knew. When you’re in your 10th year of working on OpenStack, you get to know quite a few people!

On Tuesday I started with a session on Nova-Cyborg integration. Or at least that was what it was advertised as. It turned out to be more of an “Introduction to Cyborg Concepts” talk, rather than focusing on where the two projects needed to integrate.

The crowd at the Cyborg-Nova integration session

Later on was the API-SIG BoF (Birds of a Feather) session that I headed up. There hadn’t been much traffic in the SIG ahead of the summit, so I was happily surprised when several people showed up. We ended up having a good discussion on a variety of API-related topics, and I got to meet several of the people who have joined in some of the more recent IRC discussions and Office Hours, whom I had previously known only by their IRC handles. It’s always nice to put a face to a name.

In the afternoon was a session to update everyone on the process of extracting Placement from Nova. In the past this has been a somewhat heated topic, but this time everyone seemed to understand where things were and was pretty cool with it. There weren’t any long discussions, so the session finished early. I guess that’s a very good sign that we handled that process well.

The final session of the afternoon was to discuss what the various SIGs (Special Interest Groups) and WGs (Working Groups) needed to be successful. Since the API-SIG has been around for many years, we didn’t really have any needs along these lines. Sure, it would be great to get more people involved, but it isn’t critical. Some of the newer groups explored ways of getting the word out about their existence, which is always a problem. There is so much going on in the OpenStack world that getting people to pay attention to yet another thing is always challenging.

That evening was the Open Infrastructure party, sponsored by Trilio, Mirantis, Red Hat, Open Telekom Cloud, & AVI Networks. It was held in The Church Nightclub, which is an old church that has been converted to a nightclub. There was an open bar and food available, and they had a band playing for entertainment. The location was fun, but being indoors with loud music meant that there was only so much conversation you could have. Still, it was fun!

The crowd at the Open Infrastructure Party at the Church Niteclub
A view from higher up that shows how an old church was converted into a niteclub. You can see some of the band playing at the very bottom.

There weren’t any talks on Wednesday morning that I really wanted to attend, so I spent most of the morning in the designated hacking room, working on the etcd-compute project for a while, and then implementing, in my graph database code, many of the features it currently lacks compared to Placement. I managed to implement passing a tree structure to represent nested resource providers so that it creates the corresponding nodes and relationships in the database. This implementation is becoming more and more complete, and I hope when I show it to others this week that they are able to get out of their MySQL comfort zone and see how much better this approach is for representing resources.

I went to lunch with some of the members of my team at IBM who were at the Summit, along with some people from Red Hat with whom we are working to ensure that their various offerings run as well on Power hardware as on x86. So while the pizza was tasty, it was definitely a working lunch. It was also great to meet some of the people I had only known online before.

The Red Hat – IBM lunch *after* the food had been eaten.

After lunch was a session focused on the gaps between Nova functionality and what has been implemented in OpenStack Client. Most of the missing functionality is concerned with supporting new microversions, and this support is several years behind. I’m not sure how effective the discussions were, since what is really needed is for people to take ownership of some of the needed tasks, and I didn’t hear a lot of that happening.

After that I went to the Cyborg Project Update. Once again, it probably would have been much more useful to anyone who hadn’t been following along with the project, so while I didn’t get much from it, there was a lot of information presented on the current state and future plans for Cyborg.

And that was it! The end of another Summit, even if it was the first. That evening I met my sister for dinner. She lives in the Denver area, and it was great to catch up with her and spend some time relaxing after 3 long days. But the relaxation will be short-lived, as the Train PTG starts first thing tomorrow morning!

Selfie with my big sister Geri

More fun with etcd-compute

Last time I ended my work getting etcd-compute running at the point where I needed to configure the virtual networking. I’ve been busy the past few days with meetings and other work-related stuff, so it’s taken me a while to continue on this experiment. But I have some time now; let’s jump back in!

The reason I thought that I needed to set up virtual networking was that when I ran ip a on my controller node, all I had was the loopback and main ethernet interfaces. The directions for etcd-compute talked about setting up the metadata server by adding the IP address it uses to a virtual bridge: sudo ip addr add 169.254.169.254 dev virbr0. As I didn’t have such a bridge on my VM, I figured I had to add it. I tried several guides on adding a bridge to an Ubuntu server, but each one ended up messing up the networking, making the VM unreachable. I ended up re-creating my etcd1 VM so many times that I gave up and figured I’d try without the metadata server. I started the placement and etcd servers by running docker.sh, and then just on a lark I re-ran ip a. This time it showed:

ed@etcd1:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether fa:16:3e:90:6d:d0 brd ff:ff:ff:ff:ff:ff
    inet 9.114.111.201/24 brd 9.114.111.255 scope global ens3
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe90:6dd0/64 scope link
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:35:c1:0d brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:35:c1:0d brd ff:ff:ff:ff:ff:ff

I’m not sure exactly when those entries for ‘virbr0’ and ‘virbr0-nic’ got added (virbr0 is the name libvirt uses for its default bridge, so it was most likely libvirt rather than docker), but I wasn’t going to worry about that! So I ran the following commands, and they worked without a problem:

sudo ip addr add 169.254.169.254 dev virbr0
sudo python md_server/mdserver/server.py mdserver.conf &
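
With those running, a quick sanity check (my own, not from the etcd-compute docs) is to make sure something is answering on that address; any HTTP response at all, even an error page, means the metadata server is listening:

# Expect some HTTP response from the metadata server on the bridge address
curl -i http://169.254.169.254/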

So now that the metadata server is running, time to try running ecompute on all the nodes. I use iTerm2, which has some sweet tools for splitting the terminal screen and running the same command in the different panes. I recorded a script of what happened:

I ran the command ecompute & on all the nodes to start the compute service in the background.

ed@etcd1:~/projects/etcd-compute(master)$ ecompute &
[1] 4661
ed@etcd1:~/projects/etcd-compute(master)$ 1556230694.3633301: PID: 4661 [None] {'uuid': '19a89e30-4bdd-49e7-b1a0-d4172bf7b289', 'placement': {'endpoint': 'http://etcd1:8080'}, 'etcd': {'host': 'etcd1'}, 'resize': False, 'bridge': 'br0'}
1556230694.364856: PID: 4661 [19a89e30-4bdd-49e7-b1a0-d4172bf7b289] {'VCPU': 4, 'DISK_GB': 77, 'MEMORY_MB': 7976}
1556230694.5012665: PID: 4661 [19a89e30-4bdd-49e7-b1a0-d4172bf7b289] Existing resource provider with gen 7 found with usages: VCPU: 0, MEMORY_MB: 0, DISK_GB: 0.

It’s interesting to see that because I had run this a few times earlier, etcd-compute recognized the UUID of the node, and noted that there was already an entry for that resource provider, with a generation of 7. If I were to stop that ecompute service and then re-start it, I would see the same as above, except this time the generation would be 8. That’s because when the service is killed, it changes the ‘reserved’ amount of its VCPU inventory to the total amount, effectively preventing that node from being provisioned. That change increments the resource provider’s generation.
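
For the curious, in the Placement API that “reserve everything” update looks roughly like the following (a sketch using the UUID, endpoint, and VCPU total from the output above, with auth details omitted; this is my illustration, not etcd-compute’s actual code). Because the request carries the provider’s current generation, Placement increments it on success, which is exactly the bump from 7 to 8 described above:

curl -X PUT http://etcd1:8080/resource_providers/19a89e30-4bdd-49e7-b1a0-d4172bf7b289/inventories/VCPU \
    -H "Content-Type: application/json" \
    -H "OpenStack-API-Version: placement 1.26" \
    -d '{"resource_provider_generation": 7, "total": 4, "reserved": 4}'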

At about the 30-second mark, I tried to create a VM by running the command eschedule 'resources=VCPU:1,DISK_GB:1,MEMORY_MB:256' on the etcd3 node. That worked, and almost immediately you can see that it was scheduled to the etcd1 node, and the build process starts. However, there were many errors output, with the main one being error: failed to get domain ‘ff77fe58-e96a-498b-a3f5-a59030987238’. This is repeated several times, along with a bunch of network errors. So at this point I stopped the experiment.

There’s a lot I learned by going through all this, and I see many places where the etcd-compute project could be improved, starting with the documentation. I’d also like to get some less ephemeral debugging output, so that when there are problems like the ones I had spinning up a VM, they are recorded for later analysis. I’d also like to learn a lot more about the details of the networking required, so that I can make sense of some of the networking errors.

The author of etcd-compute, Chris Dent, and I are hoping to have a mini-sprint on this project next week at the Open Infrastructure Summit in Denver, Colorado. If you will be there and want to join in the fun, drop me an email and I’ll let you know when we settle on a time and place.