Day of Shecurity 2021: Intro to Command Line Awesomeness!

Recently, I hugged a water buffalo calf. It was approximately as exciting as giving this talk is, which is to say: VERY.

First, the important stuff: here’s the repo with a big old markdown file of commands and how to use them. It also includes my talk slides, which duplicate the markdown (just with a prettier theme, in the way of these things).

Second: I went to the Lookout Security Bootcamp in 2017, one of my first forays into security things (after some WISP events in San Francisco and DEF CON in 2016). That’s where I conceived of the idea of this talk. There was a session where we used Trufflehog and other command line tools, and we concluded with a tiny CTF. Two of the three flags involved being able to parse files (with grep, among other things), and I was familiar with that from my ops work, so I won two of the three gift cards up for grabs. I used the money to buy, as I remember, cat litter, the Golang book, and a pair of sparkly Doc Martens I’d seen on a cool woman in the office that day. I still wear the hell out of the boots, including at DEF CON, and I still refer to them as my security boots.

I spent the rest of that session teaching the rad women around me the stuff I knew that let me win those challenges. This had two important effects on me. The first was that I thought, “Wait, it might be that I have something to offer security after all.” The second was that I wanted to do a session someday and teach these exact skills.

I went to Day of Shecurity in 2018 and 2019 too. It’s a fabulous event. At the last one, just a handful of months before we all went and hid in our houses for more than a year, I went to a great session on AWS security (a favorite subject of mine) by Emily Gladstone Cole. And I thought: oh, there it is. I’m ready. I told my friends that day that I wanted to come back to DoS as a presenter. And I pitched the session, it got accepted, and after a fairly dreamless year, one of mine came true.

So if you’re reading this: hello! These events really do change lives. The things you do really can bring you closer to what you want. And, as I like to say in lots of my talks, there is a place for you here if you want to be here. We need you. Keep trying.

I wrote about my own journey into security here. Feel free to ask questions, if you have them! I love talking about this, and I would like to help you get to where you want to go.

Bugcrowd LevelUp 0x07: How to Do Chrome Extension Code Reviews

This is the blog counterpart of my 22 August 2020 talk for Bugcrowd’s event, LevelUp 0x07.

A year ago right now, I was an SRE, and my only thoughts of Chrome extensions were that they were 1. something that existed, and 2. useful but also prone to news-making security problems. This year, though, I moved to an appsec engineer role, and suddenly they’re a pretty big part of my professional life. 

Since I became an engineer, I’ve been fascinated by the kinds of threats that live in places people are less prone to suspect – or even to consider. Once I learned that my current team sometimes reviews extensions, in addition to the third-party vendor testing that’s a more central part of our responsibilities, I realized I’d found another of those somewhat neglected areas of security review. Challenge: accepted.

So that’s why I like Chrome extension reviews, the process of which I’ll lead you through in just a few paragraphs, I promise. But why should you care about this? You as an appsec enthusiast or engineer, you as a bug bounty hunter, you as someone else with an interest in the strange corners of the internet that vulnerabilities can hide in. 

Ok, but why do these reviews?

Extensions open up a really interesting attack surface. They’re part of the browser, so embedded in the client, which can let them sidestep some of the protections an unaltered browser might offer. Beyond that, people often have alert fatigue and don’t think that much of a warning like “this extension can read and change your data on every website you visit.” It should be alarming, but that warning applies to so many extensions that it’s easy for people to ignore. 

They also update themselves automatically, on a timetable determined by the browser using the update path that has to be part of any extension listed in the Chrome Web Store. So you have third-party code nestled right in where the business happens, AND said third-party code can change with no warning at a cadence that isn’t set by the end user. 

Last year, Google made some moves to bolster user security, making permission scopes around Gmail and Google Drive more granular, while requiring developers using these scopes to use the smallest access to information possible. Useful; not a panacea. Alas, the Chrome API permission scopes are still fairly broad.

Google has done some mass culls of extensions, and the team behind CRXcavator (which we’ll get to shortly) has uncovered a lot of, uh, interesting extension functionality too. 

However, the automatic updates still leave even polished products open to strange second and third acts. Bad actors buy popular extensions, which then give them a direct line to thousands and thousands of browsers, which are used by folks who generally aren’t extremely vigilant about monitoring their installed extensions. By which I mean: just people, in general. Most of us don’t do a monthly extensions check, which is understandable – but it does leave the door open to some interesting things. 

Including, happily, bug bounties. 

How to review Chrome extensions: tl;dr edition

When I review a Chrome extension, I start by building a story about the extension I’ve been assigned. I research the proposed use case, read the extension’s Chrome Web Store page and other online descriptions, and do a cursory review of the code and try to learn these things:

What does this extension say it wants to do?

What does it actually do? Is that answer different?

What permissions does it need to have to accomplish its stated mission? 

Do the permissions or code make anything possible outside of the extension’s stated mission?

This is where I figure out where to focus. Most extensions, like most people, are on the up and up. But – and sit down for this one – sometimes developers get sloppy when they’re writing extensions. I wouldn’t even call this the developers’ fault a lot of the time, because Chrome API permissions are broad, and most devs are strapped for time. It’s a recipe that makes for some messy stuff ending up in extensions. 

How to review Chrome extensions: in depth

Once I’ve got a sense of what the extension is supposed to do and what it actually seems to be doing, I work through the code in three steps. 

First, I paste the extension’s Web Store ID (long, all lowercase letters, part of the URL of its Web Store page) into CRXcavator. CRXcavator gives you an aggregated risk score for the extension, based on several metrics. The most relevant to our needs are the content security policy and Chrome API permissions. (Others include externally hosted Javascript libraries and its Chrome Web Store Score.) Here’s CRXcavator’s score for one of its associated extensions.

We’re going to focus more on other parts of the manifest for the purposes of this post, but content security policy can also make a lot of good, weird stuff possible that perhaps shouldn’t be. Like I said, this setting can literally override the settings of individual websites, so it’s worth spending some time looking at the CSP of an extension as part of your broader exploration.

CRXcavator gives me an early indicator of where risk might live. You can also read through the code on their site, or you can do what I do and use an extension like CRX Extractor/Downloader to get it locally and explore in your text editor. 

Once I have the code in front of me, I read the manifest.json file.

A text editor showing a sample manifest.json file

This is where I spend a lot of my time, because it tells me a lot about what’s possible. These files can get long and intricate: there are 56 different fields that can be included. However, most of the extensions I see don’t use a ton of those. Only three are required, and only four are marked in the docs as recommended. That excludes content_scripts, though, so pretty much all extensions have to go a bit beyond the minimum in order to be able to actually do much. 

In manifest.json, I look most closely at the permissions listed, both the Chrome API permissions included and the URLs cited. From there, I read through the different Javascript files being used and what they’re going to be allowed to do. 
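To make that manifest pass concrete, here’s a small triage sketch. The manifest contents and the split between API and host permissions are my own illustrative choices, not an official rubric:

```python
import json

# A hypothetical manifest, trimmed to the fields this post cares about.
MANIFEST = json.loads("""
{
  "manifest_version": 2,
  "name": "Example Extension",
  "version": "1.0",
  "permissions": ["cookies", "webRequest", "https://*/*"],
  "content_scripts": [
    {"matches": ["<all_urls>"], "js": ["content.js"]}
  ]
}
""")

def split_permissions(manifest):
    """Separate Chrome API permissions (bare words) from host permissions (URL patterns)."""
    api, hosts = [], []
    for perm in manifest.get("permissions", []):
        (hosts if ("://" in perm or perm == "<all_urls>") else api).append(perm)
    return api, hosts

def content_script_matches(manifest):
    """Collect every URL pattern the content scripts are allowed to run on."""
    return [m for cs in manifest.get("content_scripts", [])
            for m in cs.get("matches", [])]

api, hosts = split_permissions(MANIFEST)
print("API permissions:", api)
print("Host permissions:", hosts)
print("Content script matches:", content_script_matches(MANIFEST))
```

Nothing fancy – but printing these three lists side by side is a quick way to see whether the permissions match the story the extension tells about itself.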

Finally, I read the code. I’m looking for weird uses of permissions (chrome.identity and chrome.cookies are fun, but I look up all uses of any permission methods used), any connections to third-party servers, and anything else that seems overly ambitious. 

Assessing the risks

Permissions are the sketch of what can happen; the code shows what will actually happen with them. Because of that, I spend a fair amount of time on those permitted Chrome API operations. The permissions don’t encompass all the possible risk in an extension, but they cover a lot. 

Here’s what I’m looking for across all of the extension code.

Is it sending data? If so, to where? I want to see innocuous and clearly described changes to the appearance and contents of the DOM. I do not want to see data being collected or sent. If I do see data being collected or sent, I have found the new primary focus of my time for the duration of my testing. There are legitimate reasons to do this, of course, but they need to be safely done and well justified. 

Is it storing anything unsafely in your browser? I don’t want to see plaintext secrets stored in the browser, not in session storage and especially not in local storage. This is something I touch on early because of my job’s context; it may be less important to you, but it’s still worth looking into.

Do the scopes of the Chrome API permissions let the extension snoop on everything we’re doing? We don’t like that much either. 

Is the extension snooping on everything you’re doing and then sending it somewhere else? Oh dear. 

Again, there’s opt-in functionality that can justify all of this. But it does need to be justified (from my point of view as a reviewer of risk), and then it needs to be done safely.

This second part is where the bug bounty hunter can have a lot of fun. 

Anatomy of an extension

Let’s start with manifest.json, since it’s where I spend most of my time and it most concisely shows a lot of the weird possibilities in a given extension. The key parts I look at are the general permissions of the extension and then content_scripts and their associated permissions.
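For orientation, here’s a minimal, hypothetical manifest.json – every name and value here is invented for illustration – showing the parts this section walks through:

```json
{
  "manifest_version": 2,
  "name": "Hypothetical Notes Helper",
  "version": "0.3.1",
  "permissions": ["storage", "activeTab"],
  "content_scripts": [
    {
      "matches": ["https://notes.example.com/*"],
      "js": ["content.js"]
    }
  ],
  "background": {"scripts": ["background.js"], "persistent": false},
  "browser_action": {"default_popup": "popup.html"},
  "web_accessible_resources": ["images/icon.png"]
}
```

A tightly scoped extension looks like this: one specific host pattern, a couple of narrow permissions. The further a real manifest drifts from this shape, the more time I spend on it.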

Seeking bounties? manifest.json is more likely a hint at what lies in the rest of the Javascript, which is where you’ll find what you’re actually looking for. The really interesting stuff will be in the code, of course. Some extensions have a single fairly simple JS file; others can have files upon files. The official size limit for a CRX file – the zip-based package format for a Chrome extension – is 2 GB. That’s a lot of room for function and also just utter mayhem.

Most extensions also have an HTML component, often for creating a customized menu for the extension. If you see popup.html, it’s likely this. These are subject to the same vulnerabilities as any webpage, so it’s worth testing for XSS and anything else you’d try to find in a front-end. 

Permissions and their risks

Permissions for an extension come in two forms: URLs (specific ones or patterns) and Chrome API permissions. These cover all kinds of possible interactions: alerts, accessing and altering browser storage, accessing bookmarks and tabs, among many others. 

There’s a safe use for every permission, but there are also permissions that, if included, are a great place to start when reviewing an extension to make sure it’s safe for the user. Problems arise when the dev needs to invoke a permission or a manifest.json field, like web_accessible_resources, that is necessary for some perfectly ordinary extension function, like accessing packaged resources to use in the context of a web page. Unfortunately, that same permission can be used to execute remote scripts. Like most anything involving Javascript, there are risks that come with using the tools at hand.

Google has a great guide where they list the permissions they think are most dangerous, with surprisingly high specificity. Removing the escape key’s ability to exit full-screen? Spying on the user with their own camera or microphone? Nightmares! But all are possible with the correct, terrible combination of host and API permissions.

Let’s look at a few of these permissions, now that I’ve spent some time broadly demonizing them.

URLs and pattern matching

For an extension to access a site, its URL must either be specified or match a pattern. Conveniently for devs, there are patterns that make matching easy. Conveniently for bug bounty hunters, this means that lots of devs leave the doors more open than is ideal because they’re trying to futureproof their extension (or, less commonly, because the extension genuinely has business doing something on every site you visit). manifest.json can include and exclude URLs based on these patterns, but including is much more common.

The patterns and matches I’m most wary of are *://*/* and <all_urls>, with a close second being anything that matches a common host with asterisks before or after. *://*/* and <all_urls>, however, match the entire internet, so long as the address being visited uses a permitted protocol (http, https, ftp, or file). These permissions give an extension carte blanche to work on any website you visit. Your email, your bank, your employer’s web portal when everyone’s working at home… I find this permission hard to justify. It isn’t that no extensions should use it, but it should be relatively few. Google itself declares wide-open wildcard patterns to be the highest-risk type of permission, with <all_urls> the next most dangerous.
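That triage heuristic is simple enough to sketch in code. The risk tiers below are my reading of the situation, not an official scale:

```python
def pattern_risk(pattern):
    """Rough risk tier for a Chrome match pattern (illustrative, not official)."""
    if pattern in ("*://*/*", "<all_urls>"):
        return "critical"          # matches effectively the whole internet
    scheme, _, rest = pattern.partition("://")
    host = rest.split("/", 1)[0]
    if host in ("*", ""):          # wildcard host, e.g. "https://*/*"
        return "critical"
    if "*" in host:                # host wildcard, e.g. "https://*.example.com/*"
        return "high"
    return "lower"

for p in ["<all_urls>", "*://*/*", "https://*.example.com/*",
          "https://mail.example.com/*"]:
    print(p, "->", pattern_risk(p))
```

Anything that lands in the top two tiers is where I start reading the content scripts most carefully.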

URL permissions can appear in the top-level permissions of an extension’s manifest.json or in a match connected to a particular content_script.

On top of allowing an extension free access to all of the user’s browsing, including those wide-open permissions also allows cross-origin requests with none of the usual barriers your browser would present. It also allows Javascript injection on any site the user visits. Fun!

That said, one reason people do this (for non-nefarious reasons) is precisely to avoid CORS errors. However, I promise this can be done effectively and more securely – it just requires a little more work to narrow down exactly what URL or pattern needs to be put into place. Beware asterisks; especially beware asterisks with only a host or no host at all. On the plus side, it opens up a lot of interesting territory for the bug bounty hunter. Silver lining, I guess? 


webRequest

webRequest lets the extension change and add HTTP headers. It also lets it add listeners and change behavior upon receiving the first byte of a response, at the initiation of a redirect, when a request completes, and anywhere else in the lifecycle of a web request. This can, of course, be used totally legitimately. However, it’s also been used to intercept and forward user traffic, so if it shows up in the manifest, it’s worth looking at every one of its methods that’s used in the code. One check on this behavior is that the permission also has to be paired with an appropriate host permission to be able to do anything.

This permission is likely to be in flux in the future, but for now it’s still here and still weird enough that even the EFF has weighed in on its present and future.


cookies

The cookies permission offers methods for getting, setting, and removing cookies. This can result in a lot of weird possibilities, particularly when paired with gleaning data and sending it elsewhere. Fortunately, the solution is the same as for XSS: set the HttpOnly flag on your cookie so it isn’t accessible via the Javascript of a content_script. However, if people faithfully set that flag, my job might not need to exist, so it’s worth mentioning. This permission also requires appropriate host permissions to be able to do anything.
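The HttpOnly point is worth a tiny illustration. A sketch of building a Set-Cookie header server-side (the function and names are invented; the attribute names are the real ones):

```python
def set_cookie(name, value, http_only=True, secure=True):
    # HttpOnly keeps the cookie out of reach of document.cookie and
    # content-script Javascript; Secure keeps it off plaintext HTTP.
    attrs = [f"{name}={value}", "Path=/"]
    if secure:
        attrs.append("Secure")
    if http_only:
        attrs.append("HttpOnly")
    return "; ".join(attrs)

print(set_cookie("session", "abc123"))
# session=abc123; Path=/; Secure; HttpOnly
```

A cookie set this way is still visible to an extension holding the cookies permission plus a matching host permission – but it’s out of reach of ordinary injected script, which closes off the cheapest attacks.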

The code, more generally

The manifest.json file can point to a few different uses for Javascript files. The background section of the file, which is optional, includes code that is loaded when needed, including when a content script sends a message, when certain events happen, or when the extension is first installed or updated. In contrast, content_scripts (which, as I mentioned, have their own permissions) run in the context of webpages.

When I read these Javascript files, the things I watch for are shaped by the particular concerns of my team at my very large company. My primary concerns, however, are pretty universal: how the extension might be accessing, storing, and sending information. I watch for any input text being saved or otherwise manipulated. I find out what is being collected, if the collection and handling of that information supports the story I’ve put together about the extension through my research, and, if not, where the two diverge. After that, I check for http requests. If I find them, I look at what they’re doing. What’s being stored and sent? How is it being sent, and where is it going? Is it something the user of the extension will have opted into via an account, or is this being done more quietly? 

I also watch for tells like the use of innerHTML – basically, if it’s something you’d watch for when reviewing webapp code, it’s worth trying on the menus and other popups that are part of an extension.

Good news for bug bounty hunters

Companies’ official Chrome extensions can create weird and surprising vulnerabilities for their customers. Think about the permissions involved in an extension that alters text typed into a field. It can read what you type, store it, and send it to be judged by an algorithm across the internet, with the goal of furnishing, say, spelling corrections in a way that reads as nearly instant. We type a lot of sensitive things into text fields – passwords, credit card numbers, questions to our insurance companies – and I personally don’t trust an extension to have fine enough control to not at least process the text written in fields other than the ones it’s supposed to be targeting. 

Let me tell you: even well-funded companies have too-loose permissions and other weird stuff happening in their extensions. It’s the nature of this very particular medium. Let me also tell you: there are bounties for reporting extension issues, both from Google and sometimes from the companies that make them.  

Because extensions are both everywhere and, in my experience, undervalued as a bug bounty target, I think the world of extensions in particular would benefit from your attention. Beyond what the permissions and code can yield the first time you look at an extension, the particular opportunity of how extensions are updated offers a lot of possibilities for repeated investigation. The releases for extensions, unlike those for websites, are numbered. You know if something changed, enough that Chrome has a page for it. Chrome also has its own program, in addition to extensions being in scope for many companies with existing bug bounty programs. People get paid for extension findings.

Extensions have depths that can contain a lot of, let us say, interesting things. I hope they’re inviting enough for you to dive in and start finding vulnerabilities that make this a safer ecosystem. 

Want to learn more?

If the dazzling galaxy of links throughout this post wasn’t enough, I’d suggest two more. This tutorial is how I created my first extension to install locally to better understand the process. This 2016 DEF CON talk isn’t completely applicable to the capabilities of extensions today, but it’s an excellent guide to how good and weird it has been possible to get with extensions.

You Can Put WHAT in DNS TXT records?! A blog post for !!con West 2020

Why use a phone book metaphor to explain DNS when you can get even more dated and weird? (This will make more sense when I link the video, but in the meantime, enjoy.)

It Is Known that DNS contains multitudes. Beyond its ten official record types are myriad RFC-described and wildly off-brand uses of them, and DNS TXT records contain the most possibility of weird and creative use. I quote Corey Quinn:

“I mean for God’s sake, my favorite database is Route 53, so I have other problems that I have to work through.”

(From this interview.)

He’s also described it as the only database that meets its SLAs 100 percent of the time. (Route 53 is AWS’s DNS product, encompassing TXT records and many other things, if you have not had the pleasure.)

What is this mystery that is woven through the internet? Let me introduce you to (or reacquaint you with, if you’ve met) the DNS TXT record.

DNS and its ten beautiful children

There are ten commonly used kinds of DNS records. Each one includes a reference to a specific domain or subdomain, and each usually exists to enable access to that domain’s server or to otherwise help with business connected to that domain (email settings, for instance).

The one you might’ve seen or made the most is an A record, or address mapping record. This is the one that matches a hostname to an IP address – IPv4, in this case. AAAA does the same for IPv6. There are CNAMEs, or canonical name records, which alias one hostname to another, often used when, say, a marketing project site is being hosted on Heroku or somewhere else outside your company’s primary infrastructure. You can read about them all here. This post, however, and its accompanying talk (link to come) are about my favorite of them all: TXT records.

TXT records are used for important, fairly official things, but it’s only by agreed-upon practice. While you’ll see very consistent formatting in them for things like SPF, DKIM, DMARC, or domain ownership verification (often in the form of a long random string value for a key that likely starts with _g), the truth is that you can put almost anything in there. My favorite off-brand but still computer-related one I heard about was a large university that put lat/long information in each server’s TXT records, for the sake of finding it more efficiently on a sprawling campus.

For the records’ contents, there are a few restrictions:

  • You cannot exceed 255 characters per string
  • You can include multiple strings in a single TXT record, but they must be enclosed in straight quotes and separated by commas. These can be concatenated into necessarily longer records, like DKIM with longer keys or very elaborate SPF records
  • All characters must be from the 128-character printable ASCII set (no emoji allowed, and no curly quotes or apostrophes either)
  • At least on AWS, you can’t exceed 512 bytes per record, whether it’s a single string or several
  • They are not returned in the order they were added (which made the Emily Dickinson poem I added as three records come out a little funny in my terminal; it still kind of worked, though)
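Those limits are easy to sketch. A hypothetical validator/splitter based on the rules above (the 255-character and printable-ASCII constraints are from the list; the function is mine):

```python
def to_txt_strings(text, max_len=255):
    """Split text into <=255-character strings suitable for one TXT record."""
    if not all(32 <= ord(c) <= 126 for c in text):
        raise ValueError("TXT strings here are limited to printable ASCII")
    return [text[i:i + max_len] for i in range(0, len(text), max_len)]

strings = to_txt_strings("x" * 600)
print([len(s) for s in strings])   # [255, 255, 90]
```

Run it on an emoji and it refuses, which matches my experience of what providers will accept.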

I cribbed that together from a mix of (provider-accented) experimentation and anecdotal information from others who have pondered this stuff. The official docs are often a little hand-wavy on this level of detail (appropriately, I’d say). RFC 1035, for instance, states: “The semantics of the text depends on the domain where it is found.” For its RDATA packet, it offers this:

3.3.14. TXT RDATA format

    +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
    /                   TXT-DATA                    /
    +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+

where:

TXT-DATA        One or more <character-string>s.

I mean, fair. (Data goes in data place; I cannot argue with this.) Future compatibility and planning for unexpected needs to come are a part of every RFC I’ve dug into. Meanwhile, RFC 1464 more directly portends some of the weirdness that’s possible, while also explaining the most common format of TXT records I’ve seen:

    host.widgets.com   IN   TXT   "printer=lpr5"
    sam.widgets.com    IN   TXT   "favorite drink=orange juice"

   The general syntax is:

        <owner> <class> <ttl> TXT "<attribute name>=<attribute value>"

I am accustomed, when dealing with infrastructure tags, to having the key-value format be required, either through a web UI that has two associated fields to complete or through a CLI tool that is very prepared to tell you when you’re doing it wrong.

I have not found this to be the case with TXT records. Whether you’re in a web UI, a CLI, or Terraform, you can just put anything – no keys or values required. Like many standards, it’s actually an optional format that’s just become normal. But you can do what you want, really.
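The attribute=value convention is simple enough to parse. A minimal sketch (ignoring RFC 1464’s backquote-escaping rules; the function name is mine):

```python
def parse_txt_attribute(txt):
    """Split a TXT string on the first '=' into (name, value), RFC 1464 style."""
    name, sep, value = txt.partition("=")
    if not sep:
        return None  # no '=': free-form text, not an attribute pair
    return name.strip(), value

print(parse_txt_attribute("printer=lpr5"))
print(parse_txt_attribute("favorite drink=orange juice"))
print(parse_txt_attribute("just some words"))
```

Which is really the point: a TXT record is only key-value data if everyone involved agrees to treat it that way.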

And there are peculiarities. When doing my own DNS TXT poking for this presentation and post, I worked with Dreamhost and AWS, and they acted very differently. AWS wanted only one TXT record per name (so you could have one on the domain and another on a subdomain), while Dreamhost let me make dozens – but it made DNS propagation really sluggish, sometimes returning records I’d deleted an hour ago, even after clearing the cache. (Dreamhost, meanwhile, has a hardcoded four-minute TTL for its DNS records, which you have to talk to an administrator to change, specially, on your account. It’s always interesting in DNS.) In short, the system is not prepared for that much creativity.

Too bad for the system, though. :)

DNS, ARPANET, hostnames, and the internet as we know it™️

DNS did not always exist in its current scalable, decentralized state, of course. Prior to around 1984, the connection of hostname to IP was done in a file called hosts.txt, which was maintained by the Stanford Research Institute for the ARPANET membership. The oldest example I found online is from 1974, and you can see other examples here and chart the growth of the protointernet. It went from a physical directory to more machine-readable formats, telling you and/or your computer how to reach the host identified as, say, EGLIN or HAWAII-ALOHA. These were static, updated and replaced as needed, and distributed weeklyish.

hosts.txt began its saunter out of our lives when DNS was described in 1983 and implemented in 1984, allowing the internet to scale more gracefully and its users to avoid the risk of stale files. Instead, independent queries began flying around a decentralized infrastructure, with local caches, recursive resolvers, root servers that pointed to top-level domain servers, and nameservers that kept the up-to-date IP addresses and other details for the domains in their care. (You can find a less breezy, more detailed description of this technology you used today here.)

The joys and sorrows of successful adoption

The great and terrible thing about DNS is that so many things rely on it. So if DNS is having a bad day (a much-used DNS server is down, for instance), it can interrupt a lot of processes.

That means, though, that it can also be used to do all sorts of interesting stuff. For instance, a DNS amplification attack involves sending out a ton of small DNS queries from lots of sources, spoofing the source address in the packets so they all appear to come from one place; the much larger responses then all go to that one targeted server, possibly taking it down.

TXT records figure into some of this weirdness. Let’s get into some of the interesting backflips folks have done with DNS and its most flexible of record types.

Shenanigans, various

DNS tunneling

This is, so far as I’ve been able to tell, the OG of DNS weirdness. It’s been going on for about 20 years and was first officially described at Black Hat in 2004 by Dan Kaminsky (who stays busy finding weird shit in DNS; if you like this stuff, you’ll enjoy digging into his work).

There are a few different ways to do this, but the central part is always smuggling some sort of information in a DNS query packet that isn’t supposed to be there. 

DNS packets are often not monitored in the same way as regular web traffic (but my Google ads, in the wake of researching this piece, will tell you that there are plenty of companies out there who’d love to help you with that). The permissiveness of DNS query packet movement makes a great vector for exfiltrating data or getting malware into places it would otherwise be hard to reach.

Data is sometimes smuggled via nonexistent subdomains in the names the packet seems to be querying for, but if your packet is designed to return, say, a nice chunk of TXT records? You can really stuff some information or code in there. DNS queries: they smuggle stuff AND evade lots of firewalls. Awesome!
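To see why subdomains make a workable exfiltration channel, here’s a toy encoder. The 63-octet limit is the real per-label cap from the DNS spec; everything else (base32 because DNS names are case-insensitive, the domain, the function) is my own illustration:

```python
import base64

LABEL_MAX = 63  # DNS limits each dot-separated label to 63 octets

def encode_for_dns(data: bytes, domain: str) -> str:
    # base32 survives DNS's case-insensitivity; base64 would not.
    b32 = base64.b32encode(data).decode().rstrip("=").lower()
    labels = [b32[i:i + LABEL_MAX] for i in range(0, len(b32), LABEL_MAX)]
    return ".".join(labels + [domain])

name = encode_for_dns(b"secret config dump", "t.example.com")
print(name)
```

The resulting name looks like any other oddly long hostname, and every recursive resolver along the way dutifully carries the payload toward a nameserver the attacker controls.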

Thwarting internet censorship

The more common DNS association with censorship is avoiding government DNS poisoning by manually pointing your device at a different DNS resolver. This isn’t a perfect solution and is getting less useful as more sophisticated technology is put to monitoring and controlling tech we all rely on. However, there’s another way, like David Leadbeater’s 2008 project, which put truncated Wikipedia articles in TXT records. The records aren’t live anymore, but there are so many possible uses for this! Mayhem, genuine helpfulness… why not both?


Ben Cox, a British programmer, found DNS resolvers that were open to the internet and used TXT records to cache a blog post on servers all around the world. He used 250-character base64 strings of about 187 bytes each to accomplish this, and he worked out that the caches would be viable for at least a day.
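The arithmetic there checks out: base64 expands data by a third, so roughly 187 bytes of raw data fills a ~250-character string. A sketch of that chunking (the sizes come from his numbers; the code is mine, using 186-byte pieces so each string encodes without padding):

```python
import base64

def chunk_for_txt(data: bytes, raw_chunk=186):
    # 186 bytes -> 248 base64 characters, safely under the 255-char string cap
    return [base64.b64encode(data[i:i + raw_chunk]).decode()
            for i in range(0, len(data), raw_chunk)]

post = b"A" * 1000  # stand-in for the blog post bytes
chunks = chunk_for_txt(post)
print(len(chunks), max(len(c) for c in chunks))
```

Decode the chunks in order and concatenate, and the original bytes come back out – which is all a cache of open resolvers needs to serve a blog post.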

I love all of this stuff, but this is probably my favorite off-brand TXT use. I honestly screamed in my apartment when I saw his animated capture of the sending and reassembly of the blog post cache proof of concept. 

Contributing to the shenanigans corpus

So naturally, I wanted in on this. One does not spend weeks reading about DNS TXT record peculiarities without wanting to play too. And naturally, as a creator of occasionally inspirational zines, I wanted to leave a little trove of glitter and encouragement in an unlikely part of the internet that was designed for no such thing.

Pick a number between 1 and 50. (It was going to be 0 and 49, but Dreamhost will allow 00 as a subdomain but not 0. Go figure!) Use that number as the subdomain, and look up its TXT record. For example:

dig txt

will yield 14400 IN TXT "Tilt your head (or the thing) and look at it 90 or 180 degrees off true."

Do a dig txt on the bare domain only, and you’ll see them all – if DNS doesn’t choke on this. My own results have varied. If you like spoilers, or just don’t want to run dig 50 times to see the whole of one thing you’re curious about, you can also see the entire list here.

In the meantime, now you know a little bit more about a thread of the internet that you’ve been close to and benefited from for some time. And next time: always dig the TXT record too.

How an SRE became an Application Security Engineer (and you can too)

Mural of three geometric hands with rainbow strings coming from them, across a wall topped with ivy

I’ve had an ambition to become a security engineer for some time. I realized early in my career as an engineer, in late 2015, that I found security really interesting; it was a nice complement to infrastructure and networking, the other interests I quickly picked up. However, security roles are notoriously hard to fill but also notoriously hard to get. I assumed it would be something I’d pursue once I felt more secure in my skills as a site reliability engineer – meaning I might have waited forever.

Instead, early last fall, a friend reached out to me about an opening on her team. They wanted someone with an SRE background and security aspirations. Was I interested in pursuing it as a job, she asked, or was it just a professionally adjacent hobby?

I had to sit and think about that.

For about five seconds.

I typed back a quick I VOLUNTEER AS TRIBUTE extremely casual and professional confirmation that I was indeed interested in security as a career, and then the process began in earnest. 

But before we get to where I am now, let’s back up to how I got here – and what I brought to the interview that (in my opinion, at least) got me the job.

The earnest scribbler

I’ve taken to describing the last couple of months as the Slumdog Millionaire portion of my career: it’s the part where everything I’ve done suddenly falls into place in a new and rather wonderful way.

I began my career as a writer and editor. It’s what I went to college for; rather shockingly, it’s how I supported myself for more than a decade. I’ve been a proofreader, a freelance writer, a mediocre book reviewer, a contract content editor, a lead editor in charge of style guides and training groups of other contractors, a mediocre marketing writer, and a content strategist. Toward the end of that time, I got a certificate in user-centered design from the University of Washington. I loved the work and loved content strategy, but it had become increasingly apparent over the previous couple of years that most writing that paid was meant to sell something, which wasn’t what I wanted to spend forty-plus hours a week doing. I could get by, but I knew I needed to change something, so I began paying closer attention to what I was actually really good at and what wasn’t a total slog within my jobs.

I learned to understand things and to be able to teach those same things to other people in language they could understand and refer to over and over. I became a prolific and effective documentation writer. I learned to navigate people and teams, most especially to get buy-in, because as a writer, sometimes half the job is convincing people that writing is worth paying for.

Toward the end of that period of struggle, I hung out with some software engineers for the first time. I discovered – and I say this with infinite gentleness – that they weren’t any smarter than I am. Between that and the rising resources for people looking to get into programming without a computer science degree, I decided it was now or never, and I needed to make the leap. If all else failed, I could go back to being a frustrated writer – just one who at least tried to do something else. 

Engineering part one: consulting

I landed in San Francisco in 2015 to go to code school, and I stayed when I got my first job. I worked at a consultancy for three years, doing a lot of work in healthcare and govtech. I was thrown in the deep end as an infrastructure engineer and essentially a sysadmin, which was incredibly difficult and also incredibly formative to the kind of engineer I’d become. I also spent six months as a full-stack engineer.

Code school taught me how to code, how to connect different layers of the stack to each other, and how to begin researching complex problems. At my first job, I added networking and AWS, Terraform, Bash, my love of writing CLI tools, and automation. I learned more about navigating bureaucratic nightmares, how to run teams effectively, and how to facilitate good meetings and retros. I also learned that I don’t like writing Javascript very much. Between that and realizing how much I enjoyed working with AWS, I decided my next role would be as a site reliability engineer or something like it.

Interlude one: in which security becomes very interesting

I began to think I might have something to offer the security world at the predecessor to Day of Shecurity in 2017. I was interested enough in the subject to sign up, but thinking I might have relevant skills was a different matter. Generally, I explained my interest in security as a complement to my regular job. “Ops is about building things,” I’d say. “Security tells me how you can break them, so I can learn to build them better.”

One session that day was a CTF of sorts, with three flags to find in the vulnerable test network we were exploring. Navigating to them required using command line tools, the ability to grep, and feeling comfortable with flags and documentation. I found two of the three and won two Amazon gift cards. I bought the official Golang book, cat litter, and the sparkly boots I still wear at least a couple of times a week, which I saw on a cool woman in the Lookout office that day. I’ve thought of them as my security boots ever since and have stomped through DEF CON, DoS, and job interviews in them. I use them as a reminder of that feeling: oh, wait, I might actually have something to offer in this field.

Engineering part two: SRE life

I spent 13 months as an SRE, a job I was thrilled to get (and am still thrilled I landed). I got to dig deeper into the skills I’d built at the last job, as well as spending long days with Elasticsearch, becoming friends with Ansible, and learning another flavor of Linux. My company contracted its security work out to an outside firm, and I made a point of studying what they did: reading the emails they sent back to bug bounty seekers, responding to the small incidents that popped up here and there, and carrying out mitigations from the infrastructure reviews they did for us.

I also grabbed the security-adjacent opportunities that came up, doing a lot of work on our AWS IAM policies and co-creating the company educational material on phishing avoidance. I learned about secret storage and rotation, artful construction of security groups for our servers, and how to best communicate policies like password manager use to people with lots of different technical backgrounds. 

Engineering part three: present day

Early last fall, a friend reached out saying that her team at Salesforce wanted someone with an SRE background and an interest in learning security. We’d gone to the same coding school, though not at the same time. We actually met at a WISP event. She placed first in the handcuff escape competition; I placed second. We stayed in touch. She invited me over to a certain very tall San Francisco building to talk to her and her manager about the role, and so the process began.

My team does software reviews, which can involve black box pen testing (where we don’t see the code), code reviews, consulting on responsible data use for possible software options and the expansion of existing tools we use, and being a resource for other teams. We’re a friendlier face of security, which is the only kind of security I’m really interested in being a part of. We also work directly with outside software companies to improve their security practices if they don’t pass our initial review, so I’ll get the chance to help other engineers be better, which is one of my favorite things.

As of this writing, I still spend most of my days on training: learning to write and read Apex, doing secure code reviews of increasing complexity, and figuring out who does what in a security org with more than a thousand people. Coming into a very large company requires a ton of building context, and fortunately, I get the space to figure it all out. 

Skills, revisited

So now you have an idea of what I learned and brought to the process of applying for this job. I recognized fairly early on that ops and security have a lot in common – beyond, that is, a reputation for being risk-averse and more than a little curmudgeonly.

There are skills that are essential for both, including:

  • Networking, AWS, and port hygiene
  • Coding, especially scripting; Bash and Python are great choices
  • Command line abilities
  • Looking up running processes and changing their state
  • Reading and manipulating logs

The skills that are less explicitly in demand but that I’ve found to be really useful include:

  • Communication, both written and verbal
  • Documentation creation and maintenance
  • Teaching
  • A UX-centered approach

Let me explain what I mean by that last one. As I said before, I have some education in UX principles and practices, and I’ve done official UX exercises as part of jobs. I’m still able to, if needed. The part I use most often, though, is what I’ve come to think of as a UX approach to the everyday.

What I mean by that is the ability to come into a situation assuming you don’t understand the other person’s motivations, previous actions, or context, and then to work deliberately to build that understanding by asking questions. The key part is remembering that, even if someone is doing something you don’t think makes sense, they most likely have reasons for it, and you can only discover those by asking them.

This is at the center of how I approach all of my work, and it seems to be distinctive – when I left my last job, a senior engineer pulled me aside and gave me the nicest compliment: he’d watched me take exactly that approach for the year we’d worked together, and he told me how different it was from how he worked and how much he’d learned from it. It was a very nice sendoff.

Interlude two: my accidental security education

Here’s something I only realized afterward, which I alluded to earlier: I’ve done a LOT of security learning since becoming an engineer. I just didn’t fully realize what I’d been doing because I thought I was just having fun.

So I did none of these things with interview preparation in mind. The closest I came was thinking, “Oh, I see how this might be useful for the kinds of jobs I might want later, but I’m definitely not pursuing that job right now.” Well! Maybe you can be more deliberate and aware than I was. 

These are the things I did over the last four years that ended up being really helpful when it came time to prepare officially for a security interview:

  • Going to DEF CON four times
  • Going to Day of Shecurity three times
  • Being a beta student for a friend’s security education startup for an eight-part course all about writing secure code
  • Attempting CTFs (though I’m still not super proficient at this yet)
  • Talking security with my ops coworkers, who all have opinions and stories
  • Volunteering for AWS IAM work whenever it came up as a task
  • Classes at the Bradfield School of Computer Science in computer architecture and networking (try to get a company to pay for this)

Every one of these things gave me either something that helped me feel more adept while interviewing or something I mentioned specifically when discussing things and answering questions. Four years is a lot of time to pursue something casually, especially since I usually went to an event every month or two.

I’ve also benefited a lot from different industry newsletters, especially these:

Many of these are ops-centric, but all of them have provided something as I was working toward shifting jobs. Very few issues and problems exist in only a single discipline, and these digests have been really useful for seeing the regular intersections between things I knew and things I wanted to know more about.

Interview preparation, done deliberately

I officially applied for the job a month or so after that fateful informational coffee. I applied while I was out of town for three weeks being a maid of honor in my best friend’s wedding, meaning I didn’t get to do much until I was home and had slept for a couple of days.

Once my brain worked again, I made a wishlist of everything I wanted to be able to talk confidently about. Then I prioritized it. Then I began working through everything I could. I touched on about half of it. 

I studied for about a week and a half, a couple hours at a time. I focused on three main things:

  • Exercism, primarily in Python
  • The OWASP top ten from 2013 and 2017
  • Blog posts that crossed my current discipline and the one I aspired to

The Exercism work was because I never feel like I code as much as I’d like in my jobs, and I feel more confident in technical settings when I feel more fluent in code. The OWASP reading was a mix of official resources, their cheat sheets, and other people’s writing about them; reading different perspectives is part of how I wrap my head around things like this. And the blog posts were for broader context, and to get more conversant with the intersection between my existing skills and the role I was aspiring to. The Capital One breach was really useful for this, because it happened due to misconfigured AWS IAM permissions.

This is the list I made, ordered by priority. The ones in italics are the ones I addressed to my satisfaction.

  • Python Exercism (80%)
  • Dash of Bash Exercism (20%)
  • Practice using ops-related Python libraries (request, others???)
  • Get a handle on ten core automation-related bash commands
  • Bash loops practice
  • DNS, record types
  • Hack this Site or something similar for pen testing
  • Read up on Linux privilege escalation
  • OWASP reading
  • DNS tunneling
  • Read over notes from the Day of Shecurity 2019 threat modeling workshop
  • Katie Murphy’s blog
  • flAWS s3 thing
  • Jenkins security issues
  • CircleCI breach
  • Common CI security issues
  • Common AWS security issues
  • Hacker 101
  • Something something appsec resource 
  • Infrastructure principles blog posts
  • Security exploits for DNS TXT records

And here, with dates and links, is exactly what I did to study in the week and a half leading up to the interview.

28 October

Cracking Websites with Cross Site Scripting – Computerphile

Hacking Websites with SQL Injection – Computerphile

2.5 easy Exercism Python problems

30 October

Two easy Exercism Python problems

Security Incident on 8/31/2019 – Details and FAQs 

Three Hack This Site exercises

31 October

DNS Tunneling: how DNS can be (ab)used by malicious actors

Two easy Exercism problems

3 November

How NOT to Store Passwords! – Computerphile

Socket coding in Python with a friend

4 November

A Technical Analysis of the Capital One Hack

How GCHQ Classifies Computer Security – Computerphile

Basic Linux Privilege Escalation

Two easy Exercism problems

5 November

1.5 Exercisms

The Book of Secret Knowledge

Read about Scapy for Python

6 November

Read OWASP stuff and made notes, including the 2017 writeup

Bash For Loop Examples

Every Linux Geek Needs To Know Sed and Awk. Here’s Why…

7 November

An easy Exercism

Recited OWASP stuff to Sean

Sean is my boyfriend. One of the kindest things he does for me is that he lets me explain technical things to him until I’m able to explain them to non-engineers again. I do this pretty regularly, because it’s really important to me to be able to teach people without a lengthy engineering background, and I did it during interview preparation because I know how easy it is to obscure a lack of understanding with jargon, and I didn’t want to do that. Having someone who lets me do this is perhaps the other thing I didn’t realize would be as helpful as it has been; we started doing it because he wanted to know what I did at work, and I realized that it helped make me a better communicator and engineer. May you all have someone as patient as he is to help you translate engineerspeak to human language on the regular. 

So that was how I spent my preparation time. Next: the interview. 

A series of conversations, across from the tallest tower

For reasons I’m sure you can guess, I can’t give you the most specific play-by-play of the interview process. However, I got permission to give you a higher-level view of it that I hope will still be illuminating.

My interview was a bit bespoke, because they were more accustomed to hiring people who had already been pen testers or security researchers. Because of that, in addition to proving that I knew a few things about spotting insecure code and thinking through vulnerabilities, I also talked to their DevOps architect about ops things, including opinions on infrastructure as code and the creation and socialization of development environments. (We also found that we take a similarly dim view of senior engineers who bully junior engineers.) I talked about securing a server when several different types of users would need to reach it in different ways. And yes, I talked some about the OWASP top ten. 

My bar for a “good interview” is whether the things we talked about or did were directly relevant to the needs and responsibilities of the job, and that was absolutely the case here. The only whiteboarding I did was when I volunteered to do so, drawing out network diagrams when I realized my hand gestures were not up to conveying the complexity of what we were discussing. Everything else felt collaborative, casual, and built to help me explain the things I knew about without feeling all the uncertainty that badly designed interviews can evoke. 

Getting ready for your own security path

My goal in writing this post (based on a talk I did for Secure Diversity on 28 January 2020, which I will link to when the video is up) was to give the extremely specific information about how I got the job that I’ve always been thirsty for but often found lacking in “how I got here” talks for these kinds of roles. I hope I managed that; when I proposed the talk, I was very grateful to my past self for keeping such fastidious notes. 

However, I also want to leave you with some more general ideas of how to shape your current career to more effectively get to the security role I presume you’re seeking.

Find a couple security-essential skills you already know something about and dive deeply into them. I have a lot to say about IAM stuff, in AWS and Jenkins and general principle of least privilege stuff, so that’s been something I’ve really focused on when trying to convey my skills to other people. Find what you’re doing that already applies to the role you want, and get conversational. Keep up on news stories relevant to those skills. This part shouldn’t be that hard, because these skills should be interesting to you. If they aren’t, choose different skills to focus on.

While you’re doing this learning, make sure the people in your professional life know what you’re doing. This can be your manager, but it can also be online communities, coworkers you keep in touch with as you all move companies, and anyone else you can speak computer or security with. Don’t labor in obscurity; share links, mention things you’ve learned, and throw bait out to find other people interested in the same things.

Build that community further by going to meetups and workshops. When I think about living outside the Bay Area (which of course I do, because it’s a beloved hobby of just about everyone who lives around here), one of the things that would be hardest to give up is all the free education that’s available almost every night of the week: Day of Shecurity, Secure Diversity, OWASP in SF and the South Bay, NCC meetups, and so many more. Go to the thing, learn the thing, and read about the thing after.

Finally, remember that security needs you. Like all of tech, security is better when there are a lot of different kinds of people working out how to make things and fix things. Please hang in there and keep trying.

And good luck. <3

Writing for Work: Team Structure for Great Good

In the past, my posts for various jobs have generally been the result of some curiosity, in the vein of what’s the deal with PATH, the program that formats man pages is HOW old, and what does good password hygiene look like. (Yes, I blogged in my previous life as a content strategist; no, I’m not digging those up right now. Have at it if you want.) My first post for my new job at Nylas (well, newish – I’ve been here almost eight months now) is the result of some longer study, which makes sense. One of the reasons I sought a new job was project longevity and continuity. Working as a consultant exposed me to so many new ideas and situations, but I wanted to see what I’d learn once I got to stay put for a while.

I won’t say every day has been easy, but I will say that I’m really pleased with what I’ve been doing. I get to point at a new program and essentially say “I WANT IT,” and then it’s mine. (It’s helpful when GIMME intersects with your manager’s need to delegate.) Oh, you want Elasticsearch, Breanne? HERE YOU GO. No regrets! I’ve dug deep into the weirdness of AWS IAM, moved a ton of stuff into Terraform and set our style guidelines for what good Terraform looks like, made my first EU AWS resources, learned some Ansible, got to apply Python to systems management with Boto, weirded out with Bash, and gotten better acquainted with monitoring. I’m chuffed.

A thing I gave the team in return is structure. In my work post, for obvious reasons, I didn’t go deeply into what I had previously learned that was useful here. However… what I’d previously learned was incredibly useful here. I became fatigued from new situation after new situation, but it was incredibly gratifying to get to use those same skills to make a comfortable, regular set of meetings and other expectations that I actually got to benefit from in the long term. It felt good to start good sprint planning, standups, and retros for clients, but it felt amazing to make them with myself and my ongoing teammates as the beneficiaries of this stuff. And do you know, I was pretty good at it after going through the process several times before. Fortunately, I worked with people who trusted me – and, perhaps even more important, made it clear that this was not exclusively my job and would not be solely my responsibility as time marched on. It is not extremely surprising, I think, that after setting all of this up and spreading responsibility across the team… I’m backing off the glue work for a bit, because the structure is in place for me to computer more exclusively. I’m very excited.

It also pleases me that this is all essentially another kind of automation. I love automating infra stuff – fully automated deploys and regular checks on systems and updating spreadsheets and all of the boring stuff that computers can do better than we can. What I wanted here was essentially automation in interactions, a regular cadence of events that freed us from having to reinvent structure unnecessarily, so we all had set expectations and were free to focus on the things we actually care about, that do require human interaction and innovation. I’m happy to say that it worked.

I wrote this post in part because I was proud of what I did and wanted to say so publicly. However, I also wrote it because I know the problems I had – meetings without set structure, unclear expectations between teams, irregular schedules that cause more confusion than they cure – are very common, and I hope this post helps even one other person set themselves free from another agendaless meeting, to remember that there’s something better on the other side. I’ll see you there, timer in hand, politely reminding everyone that lunch is soon, and we’d best wrap it up.

/etc/services Is Made of Ports (and People!)

Hospital switchboard, 1970s
I’ve been using a switchboard/phone extension metaphor to explain ports to my non-engineer friends who have been asking about my talk progress. Feels fitting to start the blog version of it with this switchboard professional.

You can view the video version of this talk here.

I learned about /etc/services in my first year of engineering. I was working with another engineer to figure out why a Jenkins master wasn’t connecting to a worker. Peering through the logs and netstat output, my coworker spied that a service was already listening on port 8080. “That’s Jenkins,” he said.

“But how did you know that?”

“Oh, /etc/services,” he replied. “It has all the service-port pairings for stuff like this.”

Jenkins is not, in fact, in /etc/services, but http-alt is listed at port 8080. The more immediately relevant answer was probably “through experience, because I’ve seen this before, o junior engineer,” but his broader answer got me curious. I spent some time that afternoon scrolling through the 13,000-plus-line file, interested in the ports but especially curious about the signatures attached to so many of them: commented lines with names, email addresses, and sometimes dates, attribution for a need as yet unknown to me.

I got on with the business of learning my job, but /etc/services stayed in my head as a mystery of one of my favorite kinds: everyone was using it, but many well-qualified engineers of my acquaintance had only partial information about it. They knew what used it, or they at least knew that it existed, but not where it came from. The names in particular seemed to surprise the colleagues I asked while doing this research.

This post, a longer counterpart to my !!con West talk on the same subject, digs into a process and a file that were once commonplace knowledge for a certain kind of back-end and network engineer but have since fallen out of regular use. I’ll take you through some familiar services, faces old and new, correspondence with contributors, and how you – yes, you – can make your mark in the /etc/services file.

What is it, where does it live

In *nix systems, including macOS, /etc/services lives exactly where you think it does. Windows also has a version of this file, which lives at C:\Windows\System32\drivers\etc\services. Even if you’ve never opened it, it’s been there, providing port name and number information as you go about your life.

The file is set up like this: name, port/protocol, aliases, and then usually a separate line for any comments, which is where you’ll often find names, email addresses, and sometimes dates. Like so:

ssh      22/udp  # SSH Remote Login Protocol

ssh      22/tcp  # SSH Remote Login Protocol

#                   Tatu Ylonen <>

The most common protocols are UDP and TCP, as those were the only ones you could reserve until a few years ago. However, as of RFC 6335 in August 2011 (more on that later), you can now snag a port to use with SCTP and/or DCCP as well. That RFC added the new protocols, and it also changed the old practice of assigning a port for both UDP and TCP: a port is now allocated only for the protocol requested and merely reserved for the others, to be assigned only if port availability dwindles significantly.

Incidentally, the presence of a service in /etc/services does not mean the service is running on your computer. The file is a list of possibilities, not attendance on your machine (which is why your computer is probably not currently on fire).
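The layout described above is simple enough to parse yourself. Here's a minimal Python sketch – my own illustration, not any standard tooling – with a few sample lines inlined so it runs anywhere (the real data lives in /etc/services):

```python
# A few /etc/services-style lines, inlined for illustration.
SAMPLE = """\
ssh              22/tcp              # SSH Remote Login Protocol
http             80/tcp    www      # World Wide Web HTTP
ntp             123/udp             # Network Time Protocol
"""

def parse_services(text):
    entries = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # comments run to end of line
        if not line:
            continue
        fields = line.split()
        port, proto = fields[1].split("/")    # e.g. "22/tcp"
        entries.append({
            "name": fields[0],
            "port": int(port),
            "proto": proto,
            "aliases": fields[2:],            # optional, e.g. "www" for http
        })
    return entries

for entry in parse_services(SAMPLE):
    print(entry["name"], entry["port"], entry["proto"], entry["aliases"])
```

The same function pointed at the real file (`open("/etc/services").read()`) would give you all 13,000-plus lines as structured data.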

Going through the first 500-odd lines of the file will show you some familiar friends. ssh is assigned port 22. However, ssh also has an author: Tatu Ylonen. His bio includes a lot of pretty typical information for someone listed this far up in the file: he designed the protocol, and he has also authored several RFCs, including the IETF standards on SSH.

Jon Postel is another common author here, with 23 entries. His representation in this file just hints at the depth of his contributions – he was the editor of the Request for Comment document series, he created SMTP (Simple Mail Transfer Protocol), and he ran IANA until he died in 1998. A high /etc/services count is more a side effect of the scale of his work than an accomplishment unto itself.

It’s cool to see this grand, ongoing repository of common service information, with bonus attribution. However, that first time I scrolled (and scrolled, and scrolled) through the entirety of /etc/services, what stayed with me was how many services and names I wasn’t familiar with – all this other work, separate from my path in tech thus far, with contact information and a little indicator of what that person was up to in, say, August 2006.

For instance: what’s chipper on port 17219? (It’s a research rabbit hole that took me about 25 minutes and led me across Google Translate, the Wayback Machine, LinkedIn, Wikipedia, and a 2004 paper from The European Money and Finance Forum, AMONG OTHER RESOURCES. Cough.) chipper, by Ronald Jimmink, is one of two competing e-purse schemes that once existed in the Netherlands; the longer-lasting competitor, Chipknip, concluded business in 2015. The allure of these cards, over a more traditional debit card, was that the value was stored in the chip, so merchants could conduct transactions without connectivity for their card readers. This was a common race across Europe, in the time before the standardization of the euro and banking protocols, and chipper is an artifact of the Netherlands’s own efforts to find an easier way to pay for things in a time before wifi was largely assumed.

Then there’s octopus on port 10008 (a port that apparently also earned some notoriety for being used by a worm once upon a time). Octopus is a professional Multi-Program Transport Stream (MPTS) software multiplexer, and you can learn more about it, including diagrams, here.

There are, of course, more than 49,000 others; if you have some time to kill, I recommend scrolling through and researching one whose service name, author, or clever port number sparks your imagination. Better still, run some of the service names by the longer-tenured engineers in your life for a time capsule opening they won’t expect.

Port numbers and types

Ports are divided into three ranges, splitting up the full range of 0-65535 (the range created by 16-bit numbers).

  • 0-1023 are system ports (also called well-known ports or privileged ports)
  • 1024-49151 are user ports (or registered ports)
  • And 49152-65535 are private ports (or dynamic ports)

Any services run on the system ports must be run by the root user, not a less-privileged user. The idea behind this (per W3) is that you’re less likely to get a spoofed server process on a typically trusted port with this restriction in place.
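Those boundaries are easy to encode. Here's a quick Python sketch (my own helper for illustration, not a standard API) that classifies a port into the three ranges:

```python
def port_class(port):
    """Classify a port number into the three IANA ranges."""
    if not 0 <= port <= 65535:
        raise ValueError("port numbers are 16-bit: 0-65535")
    if port <= 1023:
        return "system"   # well-known / privileged; needs root to bind
    if port <= 49151:
        return "user"     # registered ports
    return "private"      # dynamic / ephemeral ports

print(port_class(22), port_class(8080), port_class(51820))
# → system user private
```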

Ok, but what actually uses this file? Why is it still there?

Most commonly, five C library routines use this file. They are, per (YES) the services man page:

  • getservent(), which reads the next entry from the services database (see services(5)) and returns a servent structure containing the broken-out fields from the entry
  • getservbyname(), which returns a servent structure for the entry from the database that matches the service name using protocol proto
  • getservbyport(), which returns a servent structure for the entry from the database that matches the port port (given in network byte order) using protocol proto
  • setservent(), which opens a connection to the database and sets the next entry to the first entry
  • endservent(), which closes the connection to the database

Together, these routines make the service name available by port number and vice versa. Thus, these two commands are equivalent:

  • telnet localhost 25
  • telnet localhost smtp

And that equivalence exists because of information pulled from /etc/services.
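You can check this yourself without touching C: Python's socket module wraps these same lookups, consulting /etc/services under the hood. On a system with a standard copy of the file:

```python
import socket

# Name-to-port and port-to-name lookups, backed by /etc/services.
print(socket.getservbyname("smtp", "tcp"))  # 25 on a standard system
print(socket.getservbyport(25, "tcp"))      # "smtp"
```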

The use you’ve most likely encountered is netstat, if you give it flags to show service names. The names it shows are taken directly from /etc/services (meaning that you can futz with netstat’s output, if you have a little time).

In short: /etc/services used to match service to port to give some order and identity to things, and it’s used to tell developers when a certain port is off limits so that confusion isn’t introduced. Human readable, machine usable.

Enough about ports; let’s talk about the people

First, let’s talk about the method. Shortly after getting the acceptance for my !!con talk, I went through the entire /etc/services file, looking for people to write to. I scanned for a few things:

  • Email addresses with domains that looked personal and thus might still exist
  • Interesting service names
  • Email addresses from employers whose employees tended to have long tenures
  • Anything that sparked my interest

I have a lot of interest, and so I ended up with a list of 288 people to try to contact. The latest date in my local copy of /etc/services is in 2006, so I figured I’d be lucky to get responses from three people. And while more than half of the emails certainly bounced (and the varieties of bounce messages and protocols had enough detail to support their own fairly involved blog post), I got a number of replies to my questions about how their work came to involve requesting a port assignment, how it was that they knew what to do, and how the port assignment experience went for them.

I will say that the process revealed an interesting difference between how I’d write to folks as a writer and researcher (my old career) vs. how one writes questions as an engineer. As a writer working on a story that would eventually involve talking to people about a subject, I would research independently and only later approach the people involved; my questions would start a few steps back from what I already knew to be true from my research. This allows room for people to feel like experts, to provide color and detail, and to offer nuance that doesn’t come out if the person asking questions charges in declaring what they know and asking smaller, more closed questions.

This is… not the case in computer science, where questions are typically prefaced with a roundup of all 47 things you’ve googled, attempted, and wondered about, in the interest of expediency. This meant that my very second-person questions, in the vein of “how did you do this” and “what was the nature of your process,” were sometimes taken as some email rando not being able to, how you say, navigate the internet in a most basic way.

The more you know.

Happily, I got more than three responses, and people were incredibly generous in sharing their experiences, details of their work, and sometimes relevant messages from their astonishingly deep email archives.

bb, port 1984: Sean MacGuire

The first response I got, delightfully, was for the service named after my initials: bb, on port 1984. More delightfully, this turned out to be the port reserved for software called Big Brother, “the first Web-based Systems and Network Monitor.” Its author, Sean MacGuire, figured out the process for reserving a port after researching IANA’s role in it. At the time (January 1999), it was approved in 12 days. He described it as “totally painless.” Fun fact: Sean also registered the 65th domain in Canada, which required faxing proof of the company and making a phone call to the registrar.

The thing I started to learn with Sean’s response was how this was, at one point, pretty ordinary. Most web-based services restrict themselves to ports 80 and 443 now, in large part because a lot of enterprise security products clamp down on access by closing less-commonly used ports, so reserving a port for your new service isn’t always a necessary step now.

In which I am written to by one of the chief architects of the internet as we know it

The next response I got was from a little further back in computer science history. As I mentioned, I’d gone through the /etc/services file back in December, across about ten days, picking people whose email address domains looked like they might still be around, who worked for companies where people tend to have long tenures, or who were tied to interesting-sounding services.

This meant that, by the time I got to the end, I couldn’t have recounted to you the first folks I selected, particularly as I’d chosen 288 people to try to reach in all.

This is all to say that I was a little startled to read this response:

> How did you come to be the person in charge of reserving the port

I designed the HTTP protocol

Which reminded me that I had indeed selected this entry as one worth diving into, when I was first getting a handle on this research:

http          80/udp www www-http # World Wide Web HTTP
http          80/tcp www www-http # World Wide Web HTTP
#                       Tim Berners-Lee <>

He was, I am pleased to say, impeccably polite in his brief response, and he recommended his book, Weaving the Web, which is such a wonderful look at finding compromise between competing standards and design decisions across dozens of institutions, countries, and strong-willed people. As he said, more information on his place within this work and that file can be found there, and if you’re at all curious, I so recommend it.

More contributors

I also liked that some people had fun with this or considered it, as Christian Treczoks of Digivote, port 3223, put it, a “real ‘YEAH!’ moment.” Barney Wolff, of LUPA, worked a few layers of meaning into his assignment of port 1212: “I picked 1212 because the telephone info number was <area>-555-1212. And LUPA (an acronym for Look Up Phone Access) was a pun on my last name. I don’t know if my bosses at AT&T or anyone at IANA ever noticed that.”

Christian Catchpole claimed port 1185, appropriately named catchpole. He requested a low port number in the interest of claiming something memorable. He explained: “The original project back in 2002 involved a compact wire protocol for streaming objects, data structures and commands etc. While the original software written is no longer in the picture, the current purpose of the port number involves the same object streaming principal.  I am currently using the port for async communications for my autonomous marine robotics project.” The original uses of many ports have shifted into computer science history, but Christian’s projects live on.

Alan Clifford (mice, port 5022) claimed his space for a personal project; approval took 21 days. (He, like several people I contacted, keeps a deep and immaculate email archive.) Mark Valence (keysrvr at 19283 and keyshadow at 19315) recounted his involvement thusly: “I was writing the code, and part of that process is choosing a port to use.” He ended up in /etc/services around 1990 or 1991; his team had added TCP/IP as an option for their network service a year or so prior, enabling Macs, PCs, and various Unix systems to communicate with each other.

Ulrich Kortenkamp (port 3770, cindycollab) was one of two developers of Cinderella, and he claimed a port in /etc/services to codify their use of a private port for data exchange. He added: “And I am still proud to be in that file :)”

Greg Hudson’s contributions date to his time as a staff engineer at MIT, when he became a contributor to and then a maintainer of the school’s Zephyr IM protocol (zephyr-hm in the file) and then similarly with Subversion, the open-source version control system now distributed by Apache. His name is connected to ports 2102-2104 for Zephyr and port 3690 for Subversion.

Jeroen Massar has his name connected to four ports.

He noted that AURORA has an SCTP allocation too, which is still fairly rare, despite that protocol being available since 2011. He remarked, “[This] is actually likely the ‘cool’ thing about having ‘your own port number’: there is only 65536 of them, and my name is on 4 of them ;)”

I asked people how they knew what to do; some were basically like :shrug: “The RFC?” But others explained their context at the time. Mostly, folks seemed to have general industry awareness of this process and file just because of the work they did. (“I was the ‘Network Guy’ in the company,” said Christian Treczoks.) Some knew the RFC or knew to look for it; others had been involved with the IETF and were around for the formation of these standards. My anecdotal impression was that it was, at that point, just part of the work: if you were on a project that was likely to need a port, you knew how you’d go about getting one.

Who controls this file? Where does the official version come from?

Like so many things in the big world of the internet, /etc/services and its contents are controlled by IANA. What’s in IANA’s official, most up-to-date version deviates some from what you might find locally; the version of /etc/services on my Mac, as I’ve mentioned, is about 13 years out of date. However, people are still claiming ports, and you can see the most current port assignments at IANA’s online version.

On most Unixes, the version of /etc/services you see is a snapshot of the file taken from IANA’s official version at the time that version of the OS was released. When installing new services, often you’ll want to tweak your local copy of /etc/services to reflect the new service, if it’s not already there, even if only as a reminder.

However, updates vary between OSes; the version included with the Mac OS is not the most current, and how updates are added and communicated can vary widely. Debian, for instance, furnishes /etc/services as part of the netbase package, which includes “the necessary infrastructure for basic TCP/IP based networking.” If the included version of /etc/services got out of date, one could file a bug to get it updated.

To learn how /etc/services is managed and how to contribute, the gold standard is:

RFC 6335

No, seriously. The Procedures for the Management of the Service Name and Transport Protocol Port Number Registry has most of what you would need to figure out how to request your own port. While the process is less common than it once was, it’s still regimented and robustly maintained – there’s even a whole page just for statistics on port request and assignment operations.

RFC 7605, Recommendations on Using Assigned Transport Port Numbers, includes guidance on when to request a port. 7605 and 6335 are concatenated together as BCP 165, though 6335 is still the most commonly sought and cited of the two.

How can you get your piece of /etc/services glory?

There’s real estate left to claim; as of this writing, more than 400 ports are still available. Other entries carry notes that their claims are due to be removed, with timestamps from a few years ago, and just haven’t been deleted yet.

If you have a service in need of a port, I am pleased to tell you that there is a handy form to submit your port assignment request. You’ll need a service name, transport protocol, and your contact information, as you might guess. You can also provide comments and some other information to bolster your claim. The folks managing this are pretty efficient, so if your request is valid, you could have your own port in a couple of weeks, and then you could be hiding inside of your computer many years from now, waiting to startle nosy nerds like me.

Despite the shift to leaning more heavily on ports 80 and 443, people are absolutely still claiming ports for their services. While the last date in my computer’s /etc/services file is from about a decade ago, the master IANA list already has a number of dates from this year.

So: make your service, complete the form, and wait approximately two weeks. Then grep $YourName /etc/services, and your immortality is assured (until IANA does an audit, anyway).

And if you do, please let me know. Enabling people to do great, weird, and interesting things is one of my favorite uses of time, and it’d make my day. Because there are no computers without people (or not yet, anyway), and a piece of that 16-bit range is waiting to give your work one more piece of legitimacy and identity.

Thanks to everyone who wrote back to me for being so generous with their experiences, to Christine Spang for the details about Debian /etc/services updates, to the patient overseers of RFC 6335, and to the authors of all the RFCs and other standards that have managed to keep this weird world running.

New Talk: Of Tracked Changes and Diffs at StarCon 2018

I write to you from beautiful and astonishingly cold Waterloo, Ontario, Canada, where the people are kind and the excitement is palpable. (Really – everyone’s excited about what they’re doing and about sharing it. It’s great.) I did a new talk during the morning session about what I learned from my life in editorial that applied to dealing with code reviews as an engineer. Slides to come; for now, I have a written-out version for you over at

In the meantime, enjoy a picture of me in a tuque provided to me by a kind-hearted organizer so I’d be slightly less likely to die on the trek between my Airbnb and the university. I like it here.

Writing for Work: Using Custom Containers for Deploys

One of my favorite ways to learn things (to complement, you know, doing them) is writing about them. In fact, I have a talk coming up in, oh, less than two weeks that covers just this in some detail. It is, in my opinion, one of the advantages to hiring me, to be both self-serving and accurate about it.

I wanted to better understand our recent adoption of using containers for testing and deploys, so I wrote about it on the Truss blog. Behold: Easier Deploys with CircleCI and Custom Docker Containers.

I wonder how many of Unsplash‘s image results for “container” I can use before we turn to another new, shiny step into the future. Here’s another one I considered using, but I decided to stay more in our vein of infrastructure, getting things done, and the bay.

Unsplash: it’s useful.

Til next week, when I return with a link to the blog version of my next talk, wherein I let you in on the transferable skills I learned in writing workshops but apply now in code reviews.

Writing for Work: on Passwords and Better Practices

I wrote for work! I love writing for work. This time, I got to write the first entry in our security series and talk about sufficiently complex passwords, how to store them, and how to manage them across time and breaches. (Bonus: my predilection for taking travel pictures of forbidding fences and danger signs wound up being really helpful in our quest to avoid trite security-themed clip art.)

This was an exciting one to write. We’re not a security company (in fact, we are infrastructuralists, in case you had not heard), but good, solid practices, including security in all its forms, do touch our work pretty often. (See: the conversations I have with people who work with my client periodically about how we cannot use AMIs made by outsiders, we cannot use Docker containers not created internally, and we need a really-no-seriously good reason to peer our VPC to theirs.)

However, like lots of people in tech or even tech adjacent, the people we love who aren’t in tech and aren’t so steeped in this stuff ask us for guidance in how to be safer online from a technological standpoint. My password post (tl;dr: get a password manager, change all those reused passwords, repent and sin no more) is the first; we’ll also be covering how vital software updates are, how treacherous email can be, and why ad blockers are good for more than just telling popups to stfu. We’re writing this series to have a good, reliable resource for us and for others called to do Loved One Tech Support so that even those not glued to their laptop for the duration of their professional lives can adopt good practices and not be totally hosed the next time some major company’s store of usernames and passwords gets breached.

Advice for Women Thinking of Going to DEF CON (Yes, Really)


I decided to go to DEF CON last year on a lark. I went to a WISP lockpicking event last June with a friend and coworker, who informed me that she was considering going, and oh, hey, did I want to come with? I’d heard of it before, but not in detail and not quite in the right context to make it sound like something I’d want to attempt. This time landed differently, though. (I blame having recently learned to use a handcuff shim.) I spent the evening after the event looking up flights to Vegas, hotels, and other research that suggested this was not a financially responsible move and not really very good timing either. Still, it stuck in my head, and I had to pull myself away from Kayak and its ilk and make myself go to bed, later than was ideal.

The next day, I mentioned it to a different coworker, who has a good balance of fun and financial responsibility. Since we were less than a month out from the event and I had neither transportation nor a place to stay, I expected her to talk me down and suggest that I try next year.

Instead, she told me she was going, and did I want to share a room? And that was phase one done – resolve achieved, bed secured, posse acquired, and only the small matter of airfare and time off to deal with. Fine fine.

Phase two was one I’ll call, “Oh, you’re going to DEF CON? That’s… interesting.” This phase happened after reservations were in place, when I told friends and colleagues in tech what I was planning. The reactions tended to be similar: a mix of understanding why I’d consider doing such a thing and affectionate concern based on knowledge or experience of some shitty person or people in their past. These reactions came in a few flavors:

  • I went a few times but then had to stop [insert ominous look here]
  • I wouldn’t go there if you paid me, not any amount
  • I hope you enjoy yourself, but be careful (and you’re not going alone, right?)
  • You are not allowed to take any work assets with you on this trip

(This last one came from one of our bosses. We complied.)

Research nerd that I am, I looked up “How to DEF CON” but largely found articles aimed at, well, stinky boys. (#NotAllStinkyBoys, I know, but you should talk to other, more prominent bloggers about that if you want to shift those optics.) I did come away with some good, more general advice, most of which echoed what had been said to me already. Things like:

  • Turn off the wifi on your phone
  • Probably just keep your phone on airplane mode when you’re in the thick of things
  • Maybe keep it in a faraday bag, while you’re at it, come to that, and still wipe and restore it when you get home, because you never know
  • Just leave it at home, assuming home is in another state
  • Trust no ATM in proximity to the conference (though casino floor ones might be ok, heavily monitored as they are – but if you can get by without, do that)
  • Don’t bring your laptop; bring a burner if you really must
  • Probably bring a burner phone too, really
  • Bring enough cash to exist on, if you can, and maybe don’t muck around with credit or debit cards (though opt for credit if you must, because they have fraud protection)
  • It’s more about people than the sessions
  • Go to parties and side events and games and whatever else crosses your radar. Here’s a good place to start getting a sense of what’s possible for the 2017 one.

All good and well, sure. What I didn’t find was advice to match the portents from friends based specifically on my situation as a woman heading to DEF CON. So, in the way of the semi-reformed content marketer that I am, I decided to put together my own resource. Here you go: how I, a woman, an engineer, and a hard introvert with a low tolerance for dickheads, recommend approaching DEF CON.

Packing for DEF CON

The Las Vegas setting of DEF CON means that you’ll be walking between ovens and refrigerators most of the time. This is a great recipe for feeling a little uncomfortable and a little gross during most of your waking hours, but you can plan around this.

For general packing, here’s what I recommend.

  • Bring twice as many pairs of underwear as the number of days that you’re staying. Even when it isn’t warm in those big event spaces, it’s still close; you will appreciate the option to swap out layers without taking anxious inventory as you near the end of your trip.
  • Wear clothes that breathe. Beyond that, of course, wear what you want. Some women find it useful to go stealth in a hoodie and jeans; I found it oddly fun to be as dressy there as I sometimes am in normal life – but I also appreciated having options depending on the feeling of the day. Decide what will be more likely to make you feel comfortable in the context of a very busy, very distinctive conference, and you’ll be fine.
  • Excedrin. I’m headache-prone, so that’s a given for me.
  • Sleeping pills, if you roll that way. I like an OTC sleeping pill when I’m not sleeping at home. This last year, I, a person who lives alone for sanity-keeping purposes, shared a hotel room with three other people. It was worth cutting off any booze around ten so I could safely tranq myself to sleep and be both smart and sociable the next day.
  • And for your day-to-day pack, I suggest a not-small water bottle (at least 750ml), more snacks than you think you’ll really need, a hand fan, and a notebook and pens. You will learn about all sorts of weird shit, plus Twitter handles to follow, sites to look up, rad repos, and talks of yore. Have an analog way to record them for later.

Planning And Attending

Secure your tech.

See the earlier suggestion about burner laptops and/or phones and/or faraday containment devices. I learned while I was there that Bally’s told their entire staff to keep their phones off while working that weekend. I originally went on airplane mode for the first couple of days until coordinating with my friends got very annoying; then I used cell data only. Things went fine, but I plan to get a burner in place for this year. Figure that you’re going to be going fairly analog in the middle of a tech-centered conference, plan accordingly, and you’ll be fine.

An exception is if you want to participate in a CTF event or a tutorial – you’ll want a proper laptop for that kind of thing. Consider a Chromebook running Kali, with no stored login information and a plan to wipe it when you get home. And if you’re not sure of what a CTF is or are feeling a little daunted, this writeup of a rad engineer’s first one is pretty exciting.

If you do decide to bring a laptop, you can take your chances with official conference internet. Bear in mind that you need to set it up beforehand; go here for more details.

Walk fast, or make plans based on geography rather than strictly interest.

I don’t know how the rest of you manage to get to the talks you want, if they’re far away from each other. I sped across the Bally’s gaming floor over and over, from front to back, from side to side, from Vegas to Paris and back, going from a far-off upstairs meeting room to an upper-floor set of executive suites to a trio of enormous function rooms off of hallways made to look like a more restrained Versailles. I was a little more session-motivated than most people seemed to be (including the friends I traveled with), but the time between sessions made that difficult. If I didn’t walk fast and didn’t enjoy walking fast, I would’ve seen far fewer things.

Figure out where the water fountains are.

And keep that big-ass water bottle full. Plan on refilling it every couple of sessions. I’m not sure what it is about being around so many other people in close proximity that brings biological needs so much to the forefront, but it does. Routine dips in hydration or blood sugar become so much more pressing, even while surrounded by water fountains and stores only too eager to sell you supplies. Plan ahead, and your brain will work better for you.

If a party sounds cool, just sign up.

Lots of companies and villages and groups have parties, minicons, and other events. If you happen upon one that sounds good, and they request an RSVP, just do it (unless it’s a tutorial with a small capacity – then be cool, please). Everyone is dashing between five things most of the time while they’re there; might as well ensure your name is on the list.

Research sessions ahead of time; do multiple-choice selections in the moment.

(If you care a lot about sessions, of course.) To ensure you see more of what you want to see (because you will not see it all), I’d suggest culling possibilities ahead of time. I liked the app for this, as it shows you everything across the villages and the main con itself, and it lets you add competing sessions to your schedule for easy picking. There’s also the physical book you get when you check in – and the conference website, of course. Note everything that sounds interesting. Particularly if you’re new, you’ll probably learn something regardless of what you select.

However, let the final selection come in the moment, when you’re on one side of the conference space and you have to choose between staying put and sprinting across a casino floor; when it’s 20 floors up, and the lone functioning elevator is not behaving; when the line for a session is full 30 minutes before the doors open. Give yourself a few options for each timeslot and then let the conditions of the moment dictate what you actually try to do.

My favorite sessions fell into a few categories:

  • Social engineering
  • How to break shit (the Bluetooth lock session was a highlight)
  • Fun with Python
  • Feds answer questions
  • Where current events and infosec meet (like the one where a nice Danish man talked about the Ashley Madison hack and online information hygiene)
  • Mostly we’re fucked (that is, the intersection of “how to break shit” and IoT things)

I’ll likely stick to those same themes this year, but I’ll try to go outside of them too.

Be open to new things.

Skills, smells, weird social skills and experiences. There aren’t a lot of spaces like this on earth, so roll with it when it makes sense. You can be in predictable company later.

This was a big part of what friends in the know warned me about. It seems like everyone who’s gone enough times has a story of someone acting like a most memorable piece of shit. I had a couple of brushes with annoying sexist nonsense, but clearly not enough to dissuade me from coming again this year. (My current prediction is that I’ll get to come three times before something really obnoxious happens, enough to make me say the hell with it and stick to B-Sides, but I look forward to being proven wrong.) However, fucked-up things, of course, aren’t necessarily tied to gender. A male colleague of mine stopped going around DEF CON 12 when he saw someone dancing drunkenly with a live firearm at a party. We all have our limits.

Don’t go to pool parties.

(This is clearly highly subjective, and the friends I went with would likely disagree, but.) Not all dudes (#NotAllDudes) werewolf out at these very guy-centered events with bars, but enough do that I don’t find it worth it when I could be doing anything else. If you also have a certain ungenerous tolerance for risk, go literally anywhere else, because if that place sucks, you can leave much more easily than if you’re in a wet swimsuit. My tolerance for uncertain behavior in social situations out of my control has a pretty hard limit. This is outside of it. You can, of course, decide based on your own “hell no” scale.

If you can go stealth, eavesdrop on non-conference folks.

There are people – unfortunate people, innocent people, sweet summer children – who planned their Vegas escape not knowing what they’d be encountering. They thought they were there to see Cirque and eat crab legs, and they ended up navigating hordes of goons for 14 hours a day. They are hilarious and wonderful. I recommend lingering by customer service or at the buffet to overhear what you can. I felt considerably more badass after overhearing a few minutes of speculation between a clerk and a customer at the Paris casino loyalty club desk about just what the hell was going on with all the people with skull badges.

Seriously, stretch.

Even (especially) if you find yourself in the same room for several sessions in a row. Get up and stretch, especially your quads. You’ll have several days of this. Take care of yourself.


Stop by the vendors.

It’s worth it to stop by the vendors. The stuff for sale last year typically fell into one of three categories: learning, mayhem, and novelty t-shirts. The first two are pretty alluring to me, and I saw things for sale that one doesn’t typically see anywhere else. It’s worth budgeting for, ideally in cash.

My souvenirs from last year included a pen testing book from No Starch, a couple handcuff shims (you never know), two clear padlocks, and a set of lockpicks for the friend who watched my cats while I was gone. I was pretty satisfied with this, and this year I’ll probably budget for an Ubertooth or something else similarly fun and shiny.

It’s normal, with conferences, to be tempted to wait until the last day to go buy things to try to catch discounts, but at DEF CON, stuff will sell out. If there’s something you really want (and really don’t want to buy online with a credit card), just get it the first day. Nothing is overpriced if you’re satisfied with what you bought and happy with the experience.

One exception is if you wear a smaller t-shirt size. Sizes L and bigger sell out pretty fast, so if you wear one of those: buy sooner. If you’re more of a small or medium: late Saturday or anytime Sunday is a fine time to get your smaller DEF CON shirt with a little break in price.

What I’ll Do Differently This Year

I was pretty satisfied with how last year went, particularly considering the warnings I got. That said, there are a few things I’ll keep in mind when planning my 2017 trip.

Get there on Wednesday night.

Last year, my friends and I used the usual approach for nonprofessional, more culture-centered conferences and planned to arrive on day two. This meant we had access to zero workshops, missed a bunch of DEF CON 101 stuff, and spent more than a day with the flimsy temp badges they give out once the rad ones are gone. It was not an unreasonable approach, but it was wrong and a bit of a bummer. This time, we’re getting in on Wednesday night.

Figure out parties and villages to visit ahead of time.

Last year, though I was told about this, I didn’t quite get how much of DEF CON is in the side events. Deep down, I am basically Hermione, so the idea of paying for a conference and not going to as much of its official programming as I reasonably could just did not compute. This time, I’m going to ask my friends to help me be more fun than comes naturally to me sometimes.

Tell people who say stupid things to fuck off.

I’m really only thinking of a single situation here, but I was still in “I’m new, I’m a guest in this place and trying to learn it” mode, so I didn’t say anything, and clearly it still bothers me. So: I’ll say something next time. If someone else feels safe to be a little obnoxious, I’ll remind myself that I have the privilege to risk the same. There were 22,000 people there last year. I can tell someone acting like an ass to get the hell away from me, and I’ll go try my luck with the other 21,999.

What I’ll Repeat

Roll with a group of women.

Our lady quad occasionally picked up other lone women like an awesome Katamari, and it was a great way to meet interesting people. It was easier to take chances and drift away for a few hours because I knew I could rejoin my group of understanding friendlies whenever I needed to. (If you’re a woman going solo to DEF CON, feel free to say hello. We would love to meet you.)

Revel in the very short women’s bathroom lines, because when do I ever get to experience that otherwise. (Infosec and infosec-adjacent conferences, that’s when. I don’t like what it’s a symptom of, but I’ll take a very small bit of ease in the meantime.)

Stay nearby, but not in the conference hotel itself.

I liked being able to use wifi when I tucked in for the night (though there are reasonable arguments that even this is not a great move), and there was something calming about leaving the middle of the action and being able to turn off my situational wariness.

In Conclusion

I’m an engineer with a love of people breaking shit, making shit do what it was not originally intended to do, and smartasses in general. I liked DEF CON. I’m looking forward to it again – enough to deal with Las Vegas in bloody July. However, it’s very much its own weird animal, a self-selected group different from any I’ve ever circulated amongst before. But, as in most groups of humans, most people are benign, some are interesting, some are “interesting,” some are lovely, and some are viruses with shoes. I’d say your chances of having something unpleasantly memorable happen at DEF CON are higher than among the average population, but not so high that it’s worth skipping if you also like the things I listed above.

There are situations, though, that don’t fit neatly into the suggestions and categories I set out above, so I’ll leave you with some miscellaneous observations from my notebook to place you in the setting in a more immediate way.

  • 98.6 degrees in here, and a pervasive recurring smell of farts and accumulated humanity.
  • Opinionated, reality-divorced emitters of skin clouds and biome signature
  • Apparently a room full of dudes will not understand why you shouldn’t text your dick to someone
  • The current version of the US military interrogation manual is online and freely available
  • 3 pm: am mostly sure I am not the source of the back row funk cloud here. 3:30: rest of row left. Less sure, although funk cloud also left, so…
  • Being a woman with a wordsmith background and a tendency to observe behavior may make me an ideal mole-type. Stereotypes help us defend ourselves (or have), but we can still exploit that shit.
  • Social engineering as a woman, at a talk by women, for surprised men

However, I hope, if you’re tempted, you’ll just go for it. Come say hi if you do. And, while you’re there, try to sleep enough, don’t get too fucked up and hungover, and keep your water bottle full. And, with luck, I won’t be back here in August or in another year or two, writing about how all the warnings were right. With luck, you’ll have a good time too.

A Little More Information, if You Want It

If you’re still figuring out how to do this, here are some more resources for you.

<PennsylforniaGeek/>: The Road to DEFCON

This is the detailed post I was looking for last year. You get to have it, at least.

Reddit: a good breakdown of likely costs for the whole event

There are ways around some of these things, of course. I used Southwest points for my flight this year and am splitting a nearby Airbnb with friends, so we have more room for less money. Last year, I tried to have one good, robust meal out per day so that I wouldn’t feel too messed up from Clif bars and breakfast buffets. Figure out what you need to feel like a functioning human; budget for that. Find a roommate online if you’re broke and brave. There’s a good chance you can make this work, if you’re willing to hustle a little. 

An outsider’s view of what all the fuss is about

It gets fucked up sometimes. One of my remarkable bits of good luck is that malignant dudes mostly let me live my life. Other women are not so lucky. This post gives you an idea of what another side of the experience, quite different from mine, can be like. Take care of yourself, please. We need you.

Linked above, but worth repeating: an overview of how wifi works and what a Pineapple is, with a list of event-specific precautions on slide 17.


*I like to travel, you see, and I can get very wrapped up in planning it out.