etcd is a database originally developed by CoreOS, and is most famously used as the database at the heart of Kubernetes. It is a distributed key-value store, which in itself is not all that remarkable. The thing about etcd that makes it so attractive is the ability to watch a key for changes.
Other key/value stores, such as Redis, have implemented a similar feature, and may work just as well for you. I’ve been using etcd for years, and it’s worked well for me, so I’ve never had a reason to try these other tools.
For most data stores, the only way to find out if a particular value has changed is to poll. You issue a query for that value on a regular basis, and compare it to the last value returned to see if it has changed. This is terribly inefficient, especially with values that don’t change often. It’s also inexact with respect to time, because your system’s reaction to a changed value depends on the interval between polls. Longer intervals, while less chatty, mean that more time elapses between when the value changes and when your application responds to that change.
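To make that contrast concrete, here’s a generic sketch of the polling pattern. The function names are placeholders for illustration, not any particular client library:

```python
import time

def poll_for_changes(get_value, on_change, interval=30.0, max_polls=None):
    """Call on_change(new_value) whenever get_value() differs from the last poll.

    get_value() stands in for whatever query your data store provides;
    max_polls exists only so the loop can be bounded for testing.
    """
    last = get_value()
    polls = 0
    while max_polls is None or polls < max_polls:
        time.sleep(interval)          # nothing happens between polls
        polls += 1
        current = get_value()
        if current != last:           # the change is only noticed after the fact
            on_change(current)
            last = current
```

Note that with a 30-second interval, a change made right after a poll sits unnoticed for nearly 30 seconds — exactly the latency/chattiness trade-off described above.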
Enter etcd. Instead of polling an etcd server for changes, you can watch for changes. This is essentially a pubsub system that requires almost no configuration to work. When a key is written to etcd, if there are any watchers for that key, a message is sent to them with the new value.
This is kind of dry in theory, so let’s look at a real-world application using this system: my photoviewer and photoserver applications. These applications allow me to display photographs on monitors that can be anywhere with an internet connection, and control each of those displays from a central server. They represent the ultimate convergence of my work as an artist and my love of programming.
Each display consists of a monitor (actually a TV, but all I want is an HDMI input) and a Raspberry Pi that runs the photoviewer application. Each display has a unique ID to identify it, and when a display starts up, it registers itself with the server. The server contains the settings for that display, such as the list of photos to display, and how often to change the displayed photo.
I have one such display in the kitchen of my home, and like to change the photos displayed on it from time to time. To do that, I go into my photoserver app and change the album for that display. Almost instantly the image on the display changes. How did that happen? The server is a virtual machine running in the Digital Ocean cloud, not local to the kitchen display.
The reason this works is that I’m also running an etcd server on another cloud instance. When I change any setting for a display, the photoserver app writes a new value for that display’s key. The key consists of the unique ID of the display plus the type of value being changed. For example, if I change the photos I want displayed for a display with the ID of 65febdde-3e8a-4c76-ab8f-d8a653e466c7, the server would write the list of image names to the key /65febdde-3e8a-4c76-ab8f-d8a653e466c7:images.
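The writer side of this scheme can be sketched in a few lines of Python. This is a hedged illustration, not the actual photoserver code — the helper names and host are made up, and I’m assuming the third-party python-etcd3 client:

```python
def make_key(display_id: str, setting: str) -> str:
    """Build a key like '/<display-id>:<setting>'."""
    return f"/{display_id}:{setting}"

def write_setting(display_id: str, setting: str, value: str) -> None:
    """Write one display setting; any watchers are notified automatically."""
    import etcd3  # third-party python-etcd3 package (pip install etcd3)
    client = etcd3.client(host="etcd.example.com", port=2379)  # hypothetical host
    # A plain put() is all that's needed -- etcd handles notifying watchers.
    client.put(make_key(display_id, setting), value)
```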
That application uses the etcd3 library to watch my etcd server for changes to any key beginning with /<unique ID>. The watch() method is called with a callback method, and when a new key is written beginning with that display’s prefix, the value is sent to the callback.
The callback method sees that the full key ends with :images, so it passes the value (the list of image names) to the photo display method, which then retrieves the image and displays it. This happens in real time, without any polling of the server needed.
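The watch-and-dispatch flow described above might look something like this sketch, again using python-etcd3. The handler names and host are hypothetical, and the real photoviewer code surely differs:

```python
def setting_from_key(key: str, prefix: str) -> str:
    """For a key like '/<display-id>:images', return 'images'."""
    return key[len(prefix) + 1:]  # skip the prefix and the ':' separator

def watch_display(display_id: str) -> None:
    """Watch all keys for one display and dispatch changes to handlers."""
    import etcd3  # third-party python-etcd3 package
    prefix = f"/{display_id}"
    client = etcd3.client(host="etcd.example.com", port=2379)  # hypothetical host

    def on_change(response):
        # The callback receives a WatchResponse holding one or more events.
        for event in response.events:
            setting = setting_from_key(event.key.decode(), prefix)
            if setting == "images":
                # Hand the new image list to the (hypothetical) display code.
                display_images(event.value.decode())

    # Fire on_change whenever any key under this display's prefix is written.
    client.add_watch_prefix_callback(prefix, on_change)
```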
The original version of these apps used the traditional polling method, which seemed wasteful, considering that it was typically weeks between any changes being made. Switching to an etcd watch makes much more sense from a design perspective, and it greatly simplified the code.
Look for cases in your applications where a response is needed to a change in data. Using etcd as a mediator might be a good approach.
I’m deliberately taking a step back from my more political posts of the last few days in order to mentally process the events of the past weekend to get some perspective. Don’t worry, though, I’m sure I’ll be back to political commentary soon!
When people learn I’m a photographer, one of the first questions is invariably “So what kind of camera do you use?”. This is a perfectly understandable question, and also a completely irrelevant one.
When I first started out in photography, I had a friend who was a staff photographer for the city newspaper. He used a Canon F-1, a black-bodied professional camera. I noticed, though, that he used black electrical tape to cover the words ‘Canon’ and ‘F-1’ that were prominently written in white letters. I asked him about the tape, and he said that he got so sick of people seeing it and saying things like “Canon? Why don’t you use Nikon instead?” that he covered up the identifying marks.
A camera is a tool, nothing more. If you want to improve your photographs, buying an expensive new camera, or switching from Brand A to Brand B is not going to help.
I used to golf a lot, not that I was very good at it – I enjoyed being outside with a task to focus on, the company of other men, and, in retrospect, I enjoyed time away from my (now ex-) wife. Golf equipment companies are notorious for marketing expensive new clubs with the promise to hit the ball farther and straighter. But for everyone but the top professionals, it isn’t the club that’s holding you back; it’s your skill. You would be infinitely better off spending that money on lessons with a golf pro than on new clubs. Yet every year golfers spend their money on things that won’t help them improve.
This holds true in so many areas. New tools won’t make your woodworking better, and buying a vintage Stratocaster won’t help you play guitar better. So what will?
In almost all cases, the two things that will help are a good teacher and lots of practice. The teacher can get you on the right path, and correct you when you stray off of it. You still have to put in the time, though, if you ever want to improve.
So when should you upgrade? When you’ve mastered that equipment, and its limitations are becoming an obvious hindrance to you. Or when different equipment offers functionality that your current equipment doesn’t (and you truly need those functions).
For photography, where you get the most bang for your buck is from better lenses, not camera bodies. Last year I dropped my camera and broke the mount for the zoom lens that was my main workhorse. When I looked for a replacement, I saw that there was a professional version of the lens that had better optics, a wider aperture, and tougher construction. I really considered it, but couldn’t justify spending an extra $1300 on it. So I ended up getting the same model as the one I broke, because as nice as the pro model lens was, I couldn’t see it improving my images enough to justify the cost. Maybe someday when money isn’t a concern…
The best camera in the world is the one you have with you.
Chase Jarvis
The quote above is from a book about iPhone photography. I found out about this book after I had made a similar statement about my realization that I could create some wonderful images with my iPhone, and one of the people I was speaking with mentioned the book. I don’t own the book, but I certainly agree with the sentiment. You can have all the fancy equipment in the world, but if it’s home in your closet when an opportunity presents itself, it doesn’t do you much good.
Which brings me to the answer to my choice of camera: the Olympus OM-D E-M5 II. I was in the market for a DSLR, and looked around at the different options. I read about a new style of mirrorless camera called the Micro 4/3 system, which was significantly smaller and lighter than the full-sized DSLRs. Since my primary mode of work is walking around looking for images, smaller and lighter were big selling points. I read the reviews, and chose Olympus because of its stabilization system, and the M5 as it was the middle choice that balanced features and price. I’ve been very happy with it, and the images I create with it.
I consider myself an artist, as do many others. But that title is thrown about quite a bit, and its meaning has been diluted. So let’s look at it.
In my mind, the essence of being an artist is creating something that is not only original, but that captures or excites the interest of others. Often someone who paints or draws is automatically called an “artist”, and their product is called “art”. But that’s way too low a bar to set for that title. And for the record, I can’t paint or draw with any skill level whatsoever, and admire those who can.
I am a photographer. I recognized my attraction to photography as a child, and began taking it seriously in college. I attended a photography school for two years, and learned all about portraiture, lighting, studio arrangements, different films (yes, it was all film then!), color, tone, and print media. While I was able to master those techniques, I wouldn’t say I created art. Well, maybe with a few exceptions, such as:
What I found I enjoyed the most was simply walking around and looking. Things would strike me as visually interesting, and I would use my photographic technique to record them in a way that made interesting images. For example, my photo school was about 6 miles from my home, and I used my bike to get there. Shortly into my first semester, though, someone cut the chain I had locked it with and stole my bike. I now had to walk a mile to a bus stop, take the bus to downtown, and then walk another mile to the school. As I was walking I would look around, and things would occasionally catch my eye. Since I was carrying my camera, I began to record them. At the end of the semester we had to produce a portfolio, and so I created one called Sidewalks – all of the images were taken of sidewalks I walked on my way to/from school.
Not only was the portfolio well-received, it was noticeably different from the others. Most of the others were what I would call “traditional” photographic subjects: sunsets, landscapes, weathered barns, pets, etc., but mine were anything but traditional. So not only did the portfolio receive a good grade, it was chosen to be displayed around the campus – my first exhibition!
This is when I began to understand my creative process: instead of creating a scene by arranging items, or posing people, or any other conscious construction of the subject in front of the camera, I would explore the world as it existed, and find beauty in what others don’t see. I take special pride in images that are unremarkable in themselves, but from which I can create an interesting image. As an example:
Back in 2011 I worked at Rackspace, and the headquarters was in a refurbished shopping mall, nicknamed “The Castle”. Near the main entrance two different-colored sidewalks come together. You can see it in the center of this Google Maps view.
Over a thousand people walked past this point every day. I happened to walk past it on my morning break, looked down, and was struck by what I saw. I didn’t have a camera with me… or did I? In my pocket was my iPhone 4, so I took this photo with my phone. I’ll save my thoughts on photography gear for another day, though…
This is why I consider myself an artist: thousands walked past that spot that day, but only I saw this bit of transient beauty, and was able to capture it in a way that others could enjoy. Being able to take photographs, or paint pictures, or play piano, or sculpt clay – those are examples of crafts. But when you are able to use your craft to create something that moves other people – well, then I consider you an artist.
Last week I was fortunate enough to participate in the OpenStack Summit, which was held in beautiful Vancouver, British Columbia. This is the second summit held in Vancouver, and for good reason: the facilities are first-class, and the location is one of the most beautiful you will find.
From the signage around the Convention Centre and the Keynote, the theme of the summit was clear: Open Infrastructure. The OpenStack Foundation is broadening its focus to not only include the OpenStack code itself, but also a range of technologies to deploy, run, and support modern data centers.
The highlight (or maybe lowlight?) was the sponsored keynote by Mark Shuttleworth of Canonical. Generally speaking, companies which may be competitors in the marketplace but which work together to create OpenStack put aside their differences and focus on their shared interests. Not Shuttleworth – he used the freedom that paying for that slot offered to badmouth both Red Hat and VMWare, claiming that Canonical can deliver OpenStack for a fraction of the cost of those two companies. While it’s likely true that OpenStack on Ubuntu would be less expensive than when running on a commercial distribution, the whole thing left a bad taste in everyone’s mouth. I know that this is typical Shuttleworth, but still… the spirit of coming together to collaborate took a big hit.
One thing I noticed was this slide that was presented showing how OpenStack supports “diverse architectures”.
Up until this summit, IBM had been a Platinum Member of the OpenStack Foundation, but greatly reduced its level of financial support recently. So it was a little curious that IBM’s architecture, POWER, was missing from this slide. Probably just an oversight, right?
After the keynotes, I went to the session by Belmiro Moreira of CERN, who spoke about CERN’s experience moving their large OpenStack deployment from Cells v1 to Cells v2 running in Pike. If you don’t know CERN, they run tens of thousands of servers in two data centers in order to support the research computations needed for the Large Hadron Collider. An inside joke among OpenStack developers, when considering a change, is to ask whether it will help CERN or not – they’re sort of our performance test bed. Belmiro’s talk was very enlightening about just how these changes affected their performance. At first they had horrible results, but they were able to remedy them with config option changes as well as some horizontal scaling. In other words, it worked the way we had hoped it would: adjusting things that were designed to be adjusted, instead of having to hack around the code.
Another interesting session was the one discussing what would be needed to extract the Placement service from Nova into an independent project. The session was led by Chris Dent, who has done a lot of the prep work for the extraction. Nothing unexpected came from the session, which is a good thing; it showed that everyone on the Nova and Placement teams is in agreement on the path forward.
There was a session on Tuesday morning entitled “Revisiting Scalability and Applicability of OpenStack Placement”, by Yaniv Saar. There was some confusion on the subject, as the presenter used non-standard terminology, which was unfortunate; he used ‘placement’ to refer to the output of the Nova scheduler, not the Placement service itself. He had done extensive testing and statistical analysis to support his concept of a variation of the caching scheduler that only refreshed its cache after a given number of failures. The problem with this session was that all the work was done on the Mitaka code base, which pre-dates the creation of the Placement service. Most of the issues he “solved” have already been addressed by the Placement service, so his conclusions, while thoroughly backed up with numbers, dealt with a 3-year-old code base and were irrelevant to the state of scheduling in Nova today.
After that was the API-SIG session (etherpad), where Gilles Dubreuil of Red Hat led the discussion about running a proof-of-concept for GraphQL. We discussed the various options for the best way to move forward with the PoC, with the principle that at the end (assuming success), we wanted a result that would be the most impressive to the OpenStack community, and possibly persuade teams to adopt GraphQL. Gilles volunteered to lead this effort, and all of us in the API-SIG will be following closely to gauge the progress.
In the afternoon I went to the session on StarlingX, a new project from Wind River and Intel. I’m not up on all the history of this project, but it sure raised a lot of strong reactions among some long-time OpenStack people. Even so, I really don’t get the downside here; if you don’t want to support this code, well, just don’t support it. If there aren’t enough people who are interested, it will die a deserved death. If people do find some value there, then have at it.
Later in the afternoon I gave a talk along with Eric Fried on the state of the Placement service. Eric started by demonstrating that Placement isn’t just for Nova; it could be used to manage the groceries in your refrigerator! The examples were humorous, but did serve to show that the Placement service is agnostic about what sorts of resources you want to manage with it. I followed that with a recap of all the changes we had done in Queens and Rocky (so far), and what we are and will be working on in the future. I’ve gotten some positive feedback from people who attended the talk, so that makes me happy.
Wednesday was light on sessions for me, because I had to take advantage of being in the same time zone as Tony Breeds of Red Hat, with whom I’m collaborating on some internal IBM-Red Hat stuff. We had been having some issues, and the half-day time difference made it hard to get any momentum. So I spent a good deal of the day working on the internal project with Tony.
One session that was interesting was on API Debt Cleanup, which arose from an extended discussion on the openstack-dev mailing list. The advent of microversions has made adding to or changing an API smoother, but removing things that we no longer want to support isn’t any easier. The consensus was that raising the minimum microversion that is supported should be signaled by a new major version. Some people on the dev side weren’t clear why they should keep supporting ancient, rusty parts of the code, but since there are SDKs that have been released that may use that code, we can’t ever assume that “no one uses this anymore”. Another part of the discussion was about making error codes/messages more consistent across projects. There were some proposed formats, but none that I feel provided any advantage over the existing API-SIG guideline on Error formats.
Thursday was the final day of the summit. I spent a lot of it working on the internal IBM-Red Hat project with Tony, with the rest of it focused on the Technical Committee sessions. I haven’t been as active in TC matters since they switched from a regular weekly meeting to the Office Hours format, but I do try to keep up with things via the mailing list. I don’t have any particular insights to share with you here, but it was good to see that the TC is getting better at communicating what’s going on to the public, and that they are reacting to criticisms, real or perceived, of how and what they do. I was also encouraged by their acknowledgement of the lack of geographic diversity in their membership, and their desire to address that.
Of course, it’s not possible to travel to Vancouver, go to a conference, and just leave. So on Thursday evening I was joined by my wife, and thanks to the long holiday weekend (at least in the US), we got to enjoy both the city of Vancouver, as well as the natural beauty of the surrounding area. Let me close with a few photos from the beautiful Vancouver area. If the OpenStack Foundation announces another summit there, I will be the first to sign up!
Bastrop, TX is a city about an hour or so away from Austin. It is also largely synonymous with the most destructive wildfire in Texas history, which began in September, 2011.
Last weekend I drove up to PyTexas, which was held at Texas A&M University, and my route led me through the Bastrop area. Nearly two years later the effect of the fire is still very striking: you’re driving along, passing through typical countryside areas, and then – black, burnt trees for miles. There is new vegetation springing up around these dead trees, but it just seemed to emphasize the destruction even more.
On my way home from the conference I pulled off the road as I passed through the area again, and took a few photos in this one small section of the devastation. I’ve posted them in an album on Amazon Photos. They don’t really do justice to the feeling you get driving through miles and miles of similar scenery, but they show the damage that was done, and how little things have recovered in two years.