Quantum Folks, We Need to Talk About January 6th

Recently, Congress held a hearing on expanding the National Quantum Initiative (NQI) Act. On its own, this doesn't sound all that notable — new research and new technologies mean new funding and new budgets, after all. Given the level of hype in quantum computing over the past few years, expanding the NQI isn't all that surprising.

At the same time, go read that first sentence again. Does it jump out at you that this is a discussion playing out in the US Congress, out of all political venues in the United States? Maybe not, as much of the rhythm of politics-as-usual plays out with the banging of gavels in hearing after hearing. Sitting in 2023, however, we are very far from politics-as-usual, so maybe the word "Congress" should jump out at you.

Maybe it is significant that the future of quantum investments by the United States government is being decided in the same halls that recently saw a manufactured debt crisis nearly shut down the entire government. Maybe it's more significant still that the hearing was held in the US House, the same august body currently debating things like how much to restrict abortion healthcare, whether or not to impeach cabinet members for spurious right-wing fever dreams, or something incomprehensible about gas stoves. Maybe it's more significant that the hearing on funding for quantum computing research and development was held in the same halls that, almost 30 months ago, were home to a vote to overturn democracy in the United States. Maybe it's more significant that the hearing was chaired by Representative Frank Lucas, who, on January 6th, sided with insurrectionists by voting to invalidate the election in Arizona.

Maybe it's significant that as Rep. Lucas regurgitated empty rhetoric about China's funding for quantum computing (making sure to get the word "communist" in there for Fox News viewers at home), he stood in the same building where, two and a half years ago, he tried to help a violent mob bring an end to free and fair elections in his own country.

It's tempting to treat quantum computing as some neutral thing, a technology without any moral values of its own. Perhaps it's even true, but that need not imply that researchers, developers, technical writers, project managers, or anyone else in the quantum research community or the quantum industry should act without morals. When someone like Rep. Frank Lucas stands up and tells us that he "cannot overstate the importance of maintaining the U.S. competitive advantage in quantum capabilities," we need to take him seriously, including the full and complete context of whom Rep. Lucas considers to be American, or even human.

Representative Lucas voted against COVID-19 relief, against providing a legal immigration path for DREAMers, against the right of same-sex or mixed-race couples to get married, against the freedom to vote (twice, even!), against healthcare for women and pregnant people more generally, against protections for Native Americans and LGBTQIA+ individuals who have been subjected to domestic violence, and against LGBTQIA+ rights more generally. Most critical of all, however, is his vote on January 6, 2021. As GovTrack.us notes:

Lucas was among the Republican legislators who participated in the attempted coup. On January 6, 2021 in the hours after the violent insurrection at the Capitol, Lucas voted to reject the state-certified election results of Arizona and/or Pennsylvania (states narrowly won by Democrats), which could have changed the outcome of the election. These legislators pumped the lies and preposterous legal arguments about the election that motivated the January 6, 2021 violent insurrection at the Capitol. The January 6, 2021 violent insurrection at the Capitol, led on the front lines by militant white supremacy groups, attempted to prevent President-elect Joe Biden from taking office by disrupting Congress’s count of electors.

Had Rep. Lucas and his effective collaborators scaling the walls outside the Capitol succeeded, we might well be dealing with a second Trump term despite the will of the voters. We can't know for certain what such a term would look like, but both in his time in office and in his social media posts since, Trump himself has given a pretty good clue that a second term would be horrific for queer people, people of color, people with disabilities, women and nonbinary people, and would be especially horrifying for anyone whose identity intersects multiple modes of oppression.

When Rep. Lucas tells us that "quantum computers have vast, untapped potential for both good and evil, which is why it’s so important that we stay ahead of our adversaries on these technologies," maybe we should consider whom Rep. Lucas considers his adversaries to be. At least for myself, I would posit that he made that much clear on January 6, 2021.

Have You Considered Egg?

I promise I'll post part II of my recent spicy quantum computing take soon, but in the meantime, I wanted to share a small bit of surreal flash fiction that I wrote recently. Thanks for reading, and I'll see you (approximately) next week!


I kind of just stared at it for a while. It was sitting there on the plate, right next to the bacon, the hashbrowns, the toast, the coffee, and the orange juice, just like it always did. I used to take it for granted that that's what you did with an egg: stare at it, sitting on the plate in its shell. Day after day, I would stop in at the café on my way to work, order the breakfast plate, and look at the egg as I carefully ate around it, not wanting to disturb it as it sat there, fragile but still unbroken. Day after day, I finished my meal and carefully slid the plate back towards the edge of the booth, put some cash down on the white hand-written bill, and left before any other customers arrived.

Except one day, that wasn't how it went at all.

I looked down at my watch; the flashing display read 8:53am, reminding me that I was currently missing my morning status meeting. I hadn't intended to be late, but once I saw how far the detour took me out of my way, I made sure to text in and offer my apologies. Not that I got a response, of course, but I hoped it was enough to at least buy some time to eat. There's a few routines that you can't really mess with, after all.

The egg just sat there on my plate as always, oblivious to how late it was, and to the din that had picked up in the café. I was far from the only customer for a change, and the whole space seemed to fill with life, accompanied by the bustle and noise that followed life wherever it went. Tentatively, I scooped my fork into the hash, but it didn't taste the same, the simple flavors of oil, starch, and salt mixing in my senses with the heavy aroma of coffee and stale air. It even looked different --- the golden brown shreds on the tarnished fork caught not only the warm yellow fluorescent light overhead, but also the sunlight that peeked in through the layers of adhesive caked on the window, a palimpsest of the different advertisements that had hung there over the past half-century. Less brown, more golden and vibrant, reflecting the fervor around it.

I chewed as best as I could, choking down just how overwhelmed I was. Stolen snippets of conversation intruded into my mind, threatening to pilfer my own thoughts as well. The potatoes were just a touch shy of burnt, as always, but their soft crunch became sharp and unsettling as I ate, even the texture of my breakfast turning against me.

Frustrated, I paused, looking up at the other diners and wishing they would stop with their noise, their smell, their movement --- all taking up more and more space in my brain. This was, or at least was supposed to be, the one moment in my day when I could just be, not have to process so many different senses to simply exist.

The egg just sat there, as always, still oblivious.

In the booth across from me, there was a man in a gray suit, a bit badly fitting and poorly pressed. What social obligation was he performing with such perfunctory and superficial compliance? The egg on my plate didn't know, or if it did, its uniform white shell betrayed no sign of comprehension, any more than its matte texture seemed aware of the sunlight. Everything else on my plate had changed, but the egg --- my egg --- just sat there.

The man in the suit brought to his mouth a fork covered in something brilliant and yellow, but what? Not hash, not any selection from a fruit cup, not any part of a pie that I could recognize. I thought through the whole menu, recalling the contents of each different dish. Every day I looked at each option before deciding, as always, on my one-egg breakfast. There was nothing for it, though, no other options on the menu. I looked over to confirm, hoping I wasn't too obvious as I noted the bacon, hash browns, and toast on his plate. No doubt about it, process of elimination told me that had to be his egg. Not white, but yellow. Not solid, but oozing down his fork. Not matte, but almost glimmering.

Why was his egg so different from mine? My egg sat there, not offering any answers at all. (Rude.) I picked my egg up, ignoring the bacon to inspect every point on its surface, studying the way it refused to even acknowledge the sunlight that danced across every other thing I could see.

I was deep in my reverie when the waitress came by to refill my coffee. The dark liquid, almost a thin tar in its viscosity, flowed into my mug on its own schedule, lapping slightly at the edges as it settled into the ceramic. The sound startled me, giving the egg --- *my* egg --- an opportunity to escape. It dropped onto my plate with a soft click, a crack spreading across its surface.

This had never happened before. Eggs weren't supposed to break. But there it was, caring more for what my hard and yellowed plate had to say about matters than for my own need to have something, anything, stay the same.

As I watched, the crack spread further. Something shiny eked its way out of the shell, dripping onto the plate. I bent down to look, and saw

My childhood room, my furniture, my toys, my young body sitting on my carpet, reading one of my books. No, not a book that I knew of, but something different. As a child, I'd been obsessed with what I would eventually understand to be civil engineering, always asking questions about who made roads, who drew the shape of those roads across the landscape, who strung bridges across chasms. The cover of this book was different, full of little blue balls flying about other balls, stylized atoms and molecules. I blinked, and the book was full of animals, then plants, then swords and suits of armor, then ancient columns holding up ancient roofs. I blinked again and saw

My office, my desk, my papers, my computer. Just like the book that wasn't my book, I saw oozing out of the egg a different office, with a typewriter, with a shelf full of brown and green hardcover books, with a dizzying array of green and black circuit boards. I saw

The city skyline from my balcony. Paris, Sydney, Tokyo, and Cairo all spread across my plate as I watched my egg spill its contents over glistening Mediterranean sand, over fields amber and verdant. I saw

My closet, full of clothes not my own. T-shirts that were just "t-shirts," and not "fitted." Practical but uninspiring shoes. A dozen copies of the same slacks. High-visibility vests. Tuxedos. Leather straps. Logo-emblazoned polos. A flapper dress. I saw

My world, but not my world. What my world could have been, what it could still be. The possibilities I had left sitting inside each and every intact egg, day after day. The worlds I had sent back to the kitchen each morning. The lives I had been afraid to let out, to reflect the sunlight, to mix with the smell of coffee, sweat, and hope, to abandon its own shape in favor of the plate or the toast or the hash.

I took a bite.

You Can't Program Quantum Computers With Python (Part I)

I'll warrant that "you can't program quantum computers with Python" is a spicy take on quantum computing, given the prevalence of Python-based toolchains for quantum computing — everything from QuTiP through to Qiskit offers Python users a way to write and run quantum programs. At the same time, it's not that hot a take, as the same qualities that prevent using Python to write quantum programs perhaps paradoxically also make Python a great language for quantum computing.

To reconcile those two seemingly opposite claims, we'll need to take a tour through two long-running dichotomies in classical software development: interpreted versus compiled languages, and programming versus metaprogramming. That will necessarily be a bit long for a single post, so let's dive in with a discussion of compiled versus interpreted languages.

Compiled and Interpreted Languages

Generally, when we talk about programming languages, folks tend to separate them into compiled languages like C, C++, Rust, and Go, or interpreted languages like Python and JavaScript. If you ask how to classify languages like C# or Java that use a virtual machine to interpret intermediate-level bytecode at runtime, you'll get a different answer depending on the biases and preferences of whomever you ask. Add just-in-time compilation into the mix, and you'll as often as not get demure mumbles followed by a sudden shift to talking about the weather.

Taxonomy is hard, and fitting all languages into one of two buckets is one of the hardest taxonomical debates we run into in classical software development. So let's approach the problem with overwhelming and embarrassing levels of hubris, and simply solve it: all programming languages are interpreted, whether or not they involve the use of a compiler. The interpreter may be built into your CPU, or might be a complex userspace application, but it exists nonetheless. A sequence of bytes is always meaningless on its own, without reference to a particular device or application interpreting those bytes as instructions.

Rather, I'd posit that when people refer to the division between compiled and interpreted languages, a large part of the confusion stems from the fact that there are actually two distinct technical questions (at least!) being juggled behind the scenes: how complex are the runtime requirements for a language, and what design-time safety does a language provide? Both of these questions are quite tied up in the task of programming quantum computers, to put it mildly.

Runtime Dependencies

When you write in a language like C, you have access to the _C Standard Library_, a set of functions and data types like fopen for opening files, or strncpy for copying strings of up to n characters from one part of memory to another. Except when you don't. That standard library needs to exist at runtime, leading to a variety of different implementations being made available, including glibc, musl, Android's Bionic, and Microsoft's Universal C Runtime. In some cases, such as working with embedded microcontrollers or other low-power devices, those kinds of implementations might not make sense, such that you might not have a standard library available at all. As a result, the programs you write in C may have heavier or lighter runtime requirements, depending on what capabilities you assume and what kinds of devices you're trying to work with.
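As an aside, you can even poke at that runtime dependency from Python. Here's a minimal sketch, assuming a glibc-based Linux system where the C standard library lives at libc.so.6 (macOS and Windows name it differently), that loads libc as a shared library and calls into it directly:

>>> from ctypes import CDLL
>>> # Load the C standard library as a shared object, just as a compiled
>>> # C program would at startup. The name "libc.so.6" is an assumption
>>> # that holds on typical glibc-based Linux systems.
>>> libc = CDLL("libc.so.6")
>>> # Call the C runtime's printf directly. C strings are byte strings,
>>> # hence the bytes literal.
>>> libc.printf(b"Hello from the C runtime!\n")
Hello from the C runtime!
25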

Traditionally, languages that we think of as being interpreted tend to have much heavier runtime requirements than compiled languages. JavaScript programs require either a browser or an engine like NodeJS to run (except when they don't), while Python carries the entire Python interpreter and its rather large standard library as dependencies.

Except when it doesn't. The MicroPython project provides an extremely lightweight implementation of the Python interpreter and an optional compiler, allowing Python to be used on small, low-power microcontrollers. The tradeoff is that, just as programming in C without the standard library is harder than programming with it, the version of Python recognized by MicroPython is a strict subset of the Python most people are used to working with, leading to a number of important differences.
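To give a concrete taste of those differences, a common idiom in code that targets both interpreters is to fall back between module names at import time; historically, MicroPython shipped trimmed-down standard library modules under u-prefixed names like ujson. A minimal sketch, assuming a MicroPython build that provides ujson:

try:
    # MicroPython historically ships slimmed-down "micro" modules
    # under u-prefixed names rather than the full CPython modules.
    import ujson as json
except ImportError:
    # Under CPython (or newer MicroPython builds), fall back to the
    # standard json module.
    import json

print(json.dumps({"qubits": 2}))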

Even MicroPython, though, is likely more than what can be reasonably run on the classical parts of a quantum device, especially when considering the extremely strict latency requirements imposed by coherence times. While there's no fixed set of runtime dependencies associated with a language, the fact that Python is very dynamic makes it difficult to fit its dependencies within the exacting requirements of quantum execution.

What do we mean by dynamic, though? Luckily, there's more to this post!

Design-Time Safety

In my previous post on types and typeclasses, I highlighted the role that types can play in checking the correctness of programs. To use an example from that post, consider passing an invalid input to a Python function that squares its argument:

>>> def square(x):
...     return x * x
...
>>> print(square("circle"))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 2, in square
TypeError: can't multiply sequence by non-int of type 'str'

Generally, if you make a logic error of this form in Python, it gets caught when you run the program. On the other hand, if I try to write the same thing in Rust, I get an error when I compile the program, complete with suggestions as to how to fix it:

$ cat src/main.rs
use num::Num;

fn square<T: Num + Copy>(x: T) -> T {
    x * x
}

fn main() {
    println!("{}", square("hello"));
}
$ cargo build
error[E0277]: the trait bound `&str: Num` is not satisfied
 --> src/main.rs:8:27
  |
8 |     println!("{}", square("hello"));
  |                    ------ ^^^^^^^ the trait `Num` is not implemented for `&str`
  |                    |
  |                    required by a bound introduced by this call
  |
  = help: the following other types implement trait `Num`:
            BigInt
            BigUint
            Complex<T>
            Ratio<T>
            Wrapping<T>
            f32
            f64
            i128
          and 11 others
note: required by a bound in `square`
 --> src/main.rs:3:14
  |
3 | fn square<T: Num + Copy>(x: T) -> T {
  |              ^^^ required by this bound in `square`

For more information about this error, try `rustc --explain E0277`.

That isn't to say that Rust is better, so much as that it provides different tradeoffs. While Python is far more flexible, and requires developers to specify a lot less information up front, that also means that there's less information available to validate programs as we write them instead of when we run them. The dichotomy isn't as strict as that, of course, thanks to design-time validators for Python like Pylint and Mypy, but it is generally true that languages like Python provide fewer design-time guarantees while languages like Rust provide more.
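To make that concrete, adding type annotations to the square function from earlier gives Mypy enough information to reject the bad call at design time, without ever running the program. A minimal sketch; the message below is approximately what current versions of Mypy report:

$ cat square.py
def square(x: int) -> int:
    return x * x

print(square("circle"))
$ mypy square.py
square.py:4: error: Argument 1 to "square" has incompatible type "str"; expected "int"  [arg-type]
Found 1 error in 1 file (checked 1 source file)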

Much of that difference stems from the fact that in Python, types are dynamic, meaning that they are a runtime property of values. We can even check what type something is at runtime and make decisions accordingly:

>>> import random
>>> if random.randrange(2):
...     x = 42
... else:
...     x = "the answer"
...
>>> type(x)
<class 'str'>
>>> print("str" if isinstance(x, str) else "not str")
str

Here again, the dichotomy between dynamic and static type systems is a bit hard to pin down, given that languages like C# and Java support reflection as a way to make runtime decisions about types that would normally be static, and that polymorphism in C++ allows for some amount of dynamism subject to an inheritance bound. Even Rust has the dyn keyword for building a dynamic type out of a static typeclass bound. We can broadly say, though, that Python has a much more dynamic type system than most languages.

The practical effects of that dynamism are far-reaching, but what concerns us in this post is the impact on quantum computing: it's difficult to use types and other design-time programming concepts to make conclusions about what a block of Python code will do when we run it. That works well when computing time is cheaper than developer time, such that we can run and test code to ensure its validity, but works dramatically less well when computing time is expensive, as in the case of a quantum device.

Next time on...

In order to be useful for programming quantum computers, we want a language that is easy to use, that has few or no runtime dependencies, and that provides strong design-time guarantees as to correctness. Put together, exploring these two dichotomies tells us that we want something that looks more like what often gets called a compiled language, even if taking the compiled-versus-interpreted taxonomy literally isn't the most useful.

That then leaves the question as to what that "compiled" language should look like, if not Python. In the next post, I'll try to answer that by arguing that one very good alternative to using Python to program quantum computers is... to use Python.

The End of QCVV

For much of the history of quantum computing, researchers have been obsessed with a little subfield known as QCVV (an initialism only the government could love — but more on that later). A couple decades down the road, we can get some clues about quantum computing as an industry by taking a closer look at that obsession, where it came from, and why it's not quite as relevant now.

The obvious first question: what in the hell does "QCVV" even mean? The name dates back to US government grant agency announcements from the early 2010s, and expands to "quantum characterization, verification, and validation." That's a bit of a misnomer, though. By contrast with classical verification and validation, which is concerned mostly with formal proofs of correctness, QCVV is mostly concerned with benchmarks to assess the quality of a given quantum device.

In the early 2010s, such benchmarks were extremely important to grant agencies looking to make decisions about what research groups to fund, and to assess how much progress different groups were making. Back then, if you wanted to ask a question like "how good is this quantum device," there was more or less no real consensus in the field as to how to get an answer, let alone what a reasonable answer might look like.

In practice, this meant that each group would use different sets of metrics from everyone else, making comparisons across groups a nigh-impossibility. It's not that there weren't any known techniques or benchmarks, it's that no one could agree on which ones to use — whether to report process fidelity, diamond norms, tomographic reconstructions of superoperators, some combination of all of them, or even something entirely different.

As a relatively junior researcher at the time, late in my PhD program, my experience was that a lot of that confusion ultimately boiled down to a lack of consensus about what we were even trying to assess in the first place. From about 1985 through to the early 2000s, quantum computing was seen mostly as an academic curiosity; sitting where we are today, quantum computing is yesterday's buzzword. Between those two extremes was a field struggling to define itself as the prospect of commercial and military viability forced a primarily academic pursuit to grapple with questions of practicality.

QCVV, then, was to some extent an offshoot of that identity crisis; an attempt to reconcile what it meant for quantum computing research to "succeed." From the perspective of the US government, there was a stunningly clear answer. They wanted quantum computers to exist so that they could run stuff like Shor's algorithm on them, with the hope of breaking the public-key cryptography schemes that power the Internet we know and love. That simple demand turns out to have one hell of an implication, feeding directly back into the history of QCVV. In particular, Shor's algorithm takes a lot of qubits to run, as well as a pretty long runtime. Running long programs on large devices takes incredibly small error rates if you want answers to be even remotely accurate.

Back of the envelope, consider running a program that takes 1,000 distinct instructions (commonly known as gates) on each of 1,000 distinct qubits. That is, as far as computing scales go, embarrassingly small — much smaller than what would be needed to run any practical quantum application. Even so, to get an average of about one error each time you run the program, you would need an error rate of no more than one in a million. Back in 2006, it was estimated that breaking a key using Shor's algorithm would take a number of qubits roughly one and a half times the length of that key. Recommended key lengths at the time were at least 2,048 bits, so given what we knew in the early 2010s, you'd at best need about 3,000 qubits running a program that's about 27 billion gates long. Putting that all together, you'd have needed error rates on the scale of one in a hundred trillion to be able to reliably use Shor's algorithm to break keys that were common at the time.
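Since it's easy to lose track of zeros at these scales, here's that same back-of-the-envelope estimate as a few lines of Python, using only the figures quoted above:

# Toy program: 1,000 gates on each of 1,000 qubits.
small_ops = 1_000 * 1_000
print(1 / small_ops)          # 1e-06: about one error in a million operations

# Shor's algorithm against a 2,048-bit key, using the estimates above:
# roughly 1.5 qubits per key bit, and a circuit about 27 billion gates deep.
n_qubits = int(1.5 * 2_048)   # about 3,000 qubits
depth = 27_000_000_000
total_ops = n_qubits * depth
print(1 / total_ops)          # about 1.2e-14: one in a hundred trillion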

It's only relatively recently that quantum devices have been able to reliably achieve error rates around 1%, so sitting in the 2010s, it was obvious that there were quite a few orders of magnitude worth of improvement needed to meet what the US government might consider "success." A trillion-fold improvement in error rates seems ridiculous on the face of it, leading some researchers to discount the possibility of quantum computing outright.

A decade earlier, in the early 2000s, however, the fault-tolerance threshold theorem guaranteed that if quantum computers were "good enough," you could use more and more qubits on your device to implement logical qubits with as small an error rate as you wanted. That is, once you hit the threshold, requirements on error rates could be exchanged for requirements on qubit count. The threshold theorem gives a precise definition for what is "good enough" to meet that threshold, but in practice, that definition wound up being incredibly hard to measure in an actual physical device.
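As a rough illustration of that exchange, consider the following toy model. The scaling law and the numbers in it are illustrative assumptions loosely patterned on surface-code estimates, not something the threshold theorem itself pins down; the point is just that below threshold, spending more qubits (a larger code distance) buys exponentially smaller logical error rates:

# Illustrative only: assume a logical error rate that scales as
# p_L ~ (p / p_th) ** ((d + 1) / 2) for a distance-d code, with a
# physical error rate p below an assumed threshold p_th of 1%.
def logical_error_rate(p, d, p_th=0.01):
    return (p / p_th) ** ((d + 1) / 2)

# The number of physical qubits per logical qubit grows only
# polynomially with distance (roughly d ** 2 for a surface code),
# while the logical error rate falls off exponentially.
for d in (3, 5, 7, 9):
    print(d, logical_error_rate(p=0.001, d=d))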

The goal of the original QCVV grant program was then to come up with techniques and benchmarks that could be used to answer whether a given physical device was closer to or further from the fault-tolerance threshold than some other device. With such a precise question as motivation, the early 2010s saw an explosion of different techniques developed to try and connect experimental observations to the mathematical definitions of the fault-tolerance threshold theorem in rigorous ways. Personally, much of my own involvement came in the form of trying to put experimental protocols, sometimes notorious for dodgy stats and hand-wavy appeals to mathematical definitions, on a firmer statistical footing.

It's difficult to overstate here just how much QCVV protocols and techniques became the focal point of intense arguments and infighting. After all, grant agencies were originally interested in QCVV to make decisions about which groups deserved funding and which groups should have their funding cut. For academic researchers, decisions about QCVV could easily make or break careers. Entire groups could lose their funding, even, putting their graduate students and postdocs into extremely precarious positions. It's small wonder, then, that friendships and collaborations faced the same kind of existential pressure in a community almost entirely without a healthy idea of work/life balance.

I promised you right in the overly sensationalized title of this post, though, that there was an "end to QCVV," so there is clearly more to the story than a few overly obsessed researchers duking it out on arXiv over exactly which benchmarks should govern academic success. Perhaps ironically, the next chapter of quantum computing went pretty much the same way as the one that led to the creation of QCVV as a subfield, starting with a question that served to crystallize discussions in the field.

By the late 2010s, the success of commercial demonstrations such as IBM's "Quantum Experience" shifted attention away from questions about eventual fault tolerance and towards questions about what could be done with the prototype quantum devices that had been available over the web since 2016. For the first time in quantum computing history, error rates of around 1% could be achieved reliably enough to offer up as a web service, instead of requiring a small army of PhD students.

Whereas before, the US government and other grant agencies around the world were largely undecided on which kinds of quantum devices to invest in — superconducting qubits, ion trap qubits, silicon dot qubits, or even something like NV center devices — corporate research programs each tended to be more committed to their own platforms. Data that could help decide between different platforms became correspondingly less important as a result, pushing research discussions away from using QCVV to assess fault-tolerance. By 2017, corporate interest in quantum computing had advanced to the point that there was even a conference, Q2B 2017, focused tightly on the potential business impact of quantum computing.

In 2018, that shift intensified with the publication of a very popular paper arguing that more attention should be focused on what we could do with non-fault-tolerant devices. The focus, it was argued, shouldn't just be on making practical quantum computers, but on finding tasks at which quantum devices could definitely beat classical devices. This goal went under the startlingly racist term of "quantum supremacy," and had the apparent advantage over the fault-tolerance goal of possibly being attainable in only a few years.

Benchmarks, techniques, and protocols for assessing which of a set of candidate platforms might one day achieve fault tolerance were suddenly less relevant in a world where decisions about platforms were much less volatile, and where the immediate goal was less about projected far-future devices than about what could be done today, or at least in the immediate future. The history of that shift is also inherently a history of how quantum computing has come to be seen as progressively more applied, and of how the definition of "applied" has become increasingly business-driven.


Thanks for reading the first post in my new newsletter! You can subscribe and help support me going forward at Forbidden Transitions. I promise that not all of my posts will be about quantum computing; I look forward to sharing my thoughts about tech more generally, tiny stories that I've written, and random stuff that's none of the above. In short, if you subscribe, you'll help me share more of me.