BSides SF 2023 talks!

Graphic of a brown rectangular cookie with a heart design and a cute smiling face

I contributed two talks to BSides SF 2023.

The first, with Amy Martin: No Adversaries: Getting Users on Your Side for Tough Transformations.

The second: New Apps, Good Snacks: Effective Threat Modeling for New Territory, featuring a dizzying array of kawaii food art, which I got from Mount Pleasant Designs on Etsy.

Decks for both talks came from SlidesGo. They’re beautifully designed, kitted out with all sorts of assets, and I will never go back to slumming it with default Google Slides templates.

BSides SF 2023: New Apps, Good Snacks: Effective Threat Modeling for New Territory

This is the blog post version of my other BSides SF 2023 talk. Yes, I’m tired. Video to come. Slides are here.

Threat modeling is all about approaching a new product, system, or situation and evaluating it with fresh eyes. Dev and product teams get immersed enough in what they’re building that excitement and familiarity can both make it hard to identify potential risk. Enter the security team!

Thing is, security folks can get used to things too. If, for instance, you find yourself reviewing similar features over and over again, incremental changes to the same feature, or otherwise getting too accustomed to the state of things, it can become harder to stay sharp and see what’s in front of you.

We’re going to walk through a situation and explore how to push against that feeling, pull back beginner’s mind, and continue to offer useful advice for the teams that depend on us.

What’s threat modeling?

Graphic of a bowl of noodles with chopsticks sticking out of the top and a cute smiling face

I like to describe threat modeling as the application of structured thinking to my own natural paranoia. That sounds flip, but I mean it: something I have in common with a lot of people who do this kind of work is that I naturally imagine worst-case scenarios, things that can break, and other potential problems. It makes me great to travel with and has contributed to my long, loyal streak as a stellar therapy client. It also means that I love threat modeling.

There are lots of formal threat modeling frameworks. If you haven’t explored this before, I suggest looking up STRIDE, PASTA, and MITRE ATT&CK. (Cybersecurity loves an acronym. If you didn’t know, now you know.) There is no one perfect framework, and cybersecurity also loves to argue, so everyone has their own opinion on what’s good and bad. My feeling on it is that if it works for you, it’s great. If the people you need to evaluate risk for understand what you create by applying the framework to their situation, it’s great. If anyone acts rude to you about your choice of framework, send them to me, and I will deal with them.

For my work as a security partner on a product security team, I don’t use anything as structured as that, instead just moving from part to part and looking at design choices and possibilities. If I worked somewhere more formal or had to talk to engineers and product folks less happy about working with me, I might reach for a more structured tool. Instead, I work less formally, and the fictional exercise we’re about to embark on reflects that.

Also useful to know: my security practice (and that of my company, which is why I work there) is more about suggesting and guiding rather than issuing stern dictates. I point out risk, offer mitigations, and record unmitigated risk for our own records. It’s a consultative role, which you’ll see reflected in how I talk about the threat modeling we’re about to walk through.

This is reflected in a structure I use sometimes for my advice: do, try, and consider. Recommendations labeled do are the closest we get to saying you must do this. We want the teams to take them very seriously. Try is a little less serious: give this a shot, please, and see if it works. Consider just tells them something that we think could be useful to them. Keep this idea or principle in mind; it may satisfy something you need in the future. I won’t use this for all the recommendations to come, but it pops up sometimes, and maybe you’ll find it useful too.

Welcome to SnackTracker product security!

Graphic of pepperoni pizza with a cute smiling face

It’s a lovely morning in the snack loyalty startup, and you are a horrible wonderful product security engineer. We work for SnackTracker, which has had a popular app for several years—but they’re ready to go bigger.

They want to go beyond their mobile and web apps and partner with fine snack shops across a carefully selected test region, offering in-person check-ins and bonuses to snack fiends who already enjoy using the SnackTracker app to track snacks they’ve eaten, wish-list snacks they long to try, and find new snacks and places to buy them. Our product team envisions a tablet where users can log snacks in person for those wish lists and get big rewards by checking into businesses, including during special events.

We’ve previously expanded some of our existing features, but it’s been a while since we’ve had something that’s this much of a departure, so we want to do this right.

Up until now, there have been two kinds of users: shop owners and snack fiends. Shop owners need to see customer details, snack trends, and reviews, and to receive and cash out loyalty points. Snack fiends need to see their account details and point balances, learn about events and new snacks, enter and update their snack preferences, and adjust communication details.

Both user types logged into the same mobile and web apps, and any differences in authorization were connected to the user type, which affected what was displayed after login.

Up until now, all threat modeling and security guidance were done in this previous context. We could make certain assumptions:

  • Users would enter their full username and password to log in, and only one to two users would ever use a single device.
  • Authentication with MFA wasn’t a hard sell, because those snack points are popular and because we ruthlessly bribed incentivized adoption by offering bonus points for MFA setup.
  • Sessions could be valid for a month for snack fiends if they checked that they were on a private device. For shop owners, because there’s a transactional element that translates to real money and because their devices were in the middle of their shops, sessions would end after a minute of inactivity, much like a credit card or bank app.
  • Snack fiends did not need to log out after using the app, and we didn’t need to log them out often.

This will all change with the dawn of SnackAdder. (Product likes Rowan Atkinson, and frankly we can’t blame them.)

The in-person team is psyched about the tie-in opportunities here. They know they want a simpler way for users to check in that doesn’t involve using the long, complex, unique passwords that obviously our superior users use without fault. They also think having people use MFA to log into a device they don’t own for a simple interaction will be a hard sell, particularly as the users will likely only interact with the SnackAdder device for a minute at a time.

The user types and their associated authorized operations will change too. The snack fiends want to log snacks and get rewards. The shop owners want a way to say “this device is connected to this business and location” without having more work to do or a new thing to worry about.

Our goal: offer actionable security advice to a product team that’s excited to move fast, protect our users and their data—and keep this team happy to work with security in the future.

Our guidance until now

Graphic of a pink cupcake with a dark pink cup, a cherry on top, and a cute smiling face

It’s useful to know the history when dealing with a team. This team is new to SnackTracker, but it includes engineers and product folks who are on teams we’ve worked with before. Past context matters when you’re working with the same engineers across time, because you get the chance to either give them deeper knowledge when bringing up something familiar again or to point out that something is new and may contrast with things they already know.

What have we told these engineers previously when consulting on mobile and web app features of the past?

  • The usual guidance around encryption: in transit and at rest, please, and use an encryption cipher that’s still considered a good industry standard. When we say that, we provide a link to part of our security guidance library, which includes a page about what’s considered good enough now. We update these pages as needed or every six months, whichever comes first.
  • Authentication works similarly: our advice has been to use our company’s go-to recommended authentication methods, which are overseen by another team. Typically, we advise against too much improvisation around both encryption ciphers and authentication.
  • For authorization, we have established user types and permissions. We also strongly recommend that authorization be enforced in code, so that certain operations depend on a permissions check rather than “we just don’t show the admin panel link in the navigation for snack fiends.” (There’s a sketch of this after the list.)
  • For user IDs, and all other kinds of IDs, we recommend using UUIDs rather than anything sequential that can be enumerated or otherwise brute forced.
  • We have a list of suggested and preferred software libraries and mandate use of Dependabot in our repos.
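To make a couple of those recommendations concrete, here’s a minimal sketch. The roles and permission names are hypothetical, but it shows the shape of “enforce authorization in code” and “use UUIDs instead of sequential IDs”:

```python
import uuid
from dataclasses import dataclass

@dataclass(frozen=True)
class User:
    id: str
    role: str  # "shop_owner" or "snack_fiend"

# Hypothetical permission sets; the point is that the check lives in code,
# not in whether the UI shows a link.
PERMISSIONS = {
    "shop_owner": {"view_customer_details", "cash_out_points"},
    "snack_fiend": {"view_own_account", "update_wishlist"},
}

def require_permission(user: User, permission: str) -> None:
    """Raise unless the user's role actually grants this operation."""
    if permission not in PERMISSIONS.get(user.role, set()):
        raise PermissionError(f"{user.role} may not {permission}")

def new_user_id() -> str:
    """Random UUIDs instead of sequential integers, so account IDs
    can't be enumerated or brute forced."""
    return str(uuid.uuid4())
```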

We also offer guidance around privacy. While this is often more of a legal concern, we have a good relationship with the SnackTracker legal team, which means product security can offer some guidance around good data practices with respect to the laws of various states and countries. We like to do better than that, though, and also promote a philosophy of not asking for any data that isn’t strictly necessary for users to use the app. Do we care about someone’s gender if they just want a snack? I hope not! If we don’t need it, we don’t ask, and we don’t store it. We also advise our teams to have a data retention policy. Also, no one in marketing has ever argued with us about this, because this is my story, and I can pretend it’s the better world I believe we’re capable of having.

Which brings us to the future: let’s offer some guidance for SnackAdder.

Threat modeling and new horizons

Graphic of a dark pink heart-shaped treat on a stick with sprinkles and a cute smiling face

Let’s look over this new system, using some of the perspectives we’ve used in the past.

Data use and collection

We always want to ask what data is being collected in a product or feature, where it’s going, where it’s being stored, how it’s being stored, and what else might happen to it. Is it staying entirely within servers we control? Does it need to be sent to a third party? Is it being appropriately treated considering its sensitivity?

In the case of SnackAdder, the product team is proposing to collect both less and more data than our existing product does. Because the tablet will only log check-ins and snack preferences, a lot of personal data isn’t even on the table. However, location-based check-ins are something we’ve discussed before. Being vehement defenders of our snack fiends, we’ve previously decided as a company not to collect location data from personal devices—slippery slopes and all. However, this device will be connected to a location, and use of it means users consent specifically to connecting this location to their account. Because of this, we offer a warning to treat this data carefully (including, ideally, a timeline for expiring it off a snack fiend’s account) but say that going forward with check-ins is fine to do, particularly as it isn’t mandatory for logging a snack to a wish list.

Authentication in the wild

We know the team wants to use a different kind of authentication than we find usual at SnackTracker, and so possibly the biggest challenge of this threat model is figuring out what we want to recommend. Being inventive is great; being inventive in bad or unhelpful ways, reinventing mistakes others have already learned from, is not.

The team wants to do something other than requiring the user to enter their username, password, and potentially MFA OTP too, all while a line might be forming behind them to interact with our incredibly popular app’s new interface. Fortunately, the team’s already done some research, and we get to walk through their proposed options.

A PIN has familiarity on its side from its frequent use in ATMs and phones, but it has the downsides of being easy to forget and having much lower entropy than a proper password, even if we require six or eight numbers instead of just four. Discussing it within security, we also feel a little queasy having a second credential for the same account, which might confuse users in a way that has security implications. This could work, but we have reservations, so let’s look at what else the team has come up with.
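For a rough sense of why that entropy gap makes us queasy, compare guessing spaces. This back-of-the-envelope math assumes uniformly random choices and a roughly 72-symbol password alphabet:

```python
import math

# Bits of entropy for each credential, assuming uniformly random choices.
pin_6 = math.log2(10 ** 6)           # ~19.9 bits for a 6-digit PIN
pin_8 = math.log2(10 ** 8)           # ~26.6 bits for an 8-digit PIN
password_12 = math.log2(72 ** 12)    # ~74.0 bits for 12 chars from 72 symbols

print(f"6-digit PIN:      {pin_6:.1f} bits")
print(f"8-digit PIN:      {pin_8:.1f} bits")
print(f"12-char password: {password_12:.1f} bits")
```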

The next option is what I call second-screen authentication. Despite its common use in signing into streaming services, I haven’t found a single consistent term for it. (If you have one, let me know.) It’s a different version of the increasingly common method of web app login where you enter your phone number or email and receive an OTP to enter to log in. In our case, one device provides a code, and we enter it into another device with an authenticated session in order to link our activity in the first device to our account. What the team envisions is entering your SnackTracker user ID into the tablet, which will generate a code that you enter on your phone. This is certainly less fiddly than the first method, but it’s still a little involved and has the additional risk of there not being strong phone signal or wifi in the snack shop. Still, this has potential.
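In code, that linking flow might look something like this sketch. The names and in-memory storage are illustrative only; a real version would persist pending codes server-side, rate limit attempts, and expire codes aggressively:

```python
import secrets
import time

# code -> (claimed_user_id, expiry); in-memory only for this sketch
PENDING_LINKS: dict[str, tuple[str, float]] = {}

def start_tablet_link(claimed_user_id: str) -> str:
    """Tablet side: after the user enters their ID, mint a short-lived code."""
    code = f"{secrets.randbelow(10**6):06d}"  # 6-digit one-time code
    PENDING_LINKS[code] = (claimed_user_id, time.time() + 120)
    return code

def confirm_on_phone(code: str, authenticated_user_id: str) -> bool:
    """Phone side: the already-logged-in app confirms the code, linking
    the tablet session to the account that actually entered it."""
    entry = PENDING_LINKS.pop(code, None)
    if entry is None:
        return False
    claimed_user_id, expires_at = entry
    return time.time() < expires_at and claimed_user_id == authenticated_user_id
```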

Finally, the team proposes logging in via a QR code generated on the user’s device via the SnackTracker app. If you wanted, they add hopefully, it could be dynamic and regenerate periodically, or every time even! (They really want this project to get moving.) This lets the user log in fast without typing anything in.

We like this option, particularly since we plan on providing additional guidance on sanitizing input, with a special dive into what can be concealed in a QR code.
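One way those rotating QR codes could work is a signed, short-lived payload: the app regenerates it periodically, and the tablet accepts only fresh, untampered codes. This sketch uses the real itsdangerous library; the key handling and the sixty-second window are illustrative assumptions:

```python
from itsdangerous import BadSignature, SignatureExpired, TimestampSigner

signer = TimestampSigner("server-side-secret-key")  # illustrative key handling

def qr_payload(user_id: str) -> str:
    """App side: the string encoded into the QR code. "Regenerating
    periodically" just means calling this again for a fresh timestamp."""
    return signer.sign(user_id.encode()).decode()

def verify_qr_scan(payload: str, max_age_seconds: int = 60) -> str | None:
    """Tablet side: reject stale or tampered codes; treat the decoded
    user ID as untrusted input either way."""
    try:
        return signer.unsign(payload, max_age=max_age_seconds).decode()
    except (SignatureExpired, BadSignature):
        return None
```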

Authorization in a new context

We realize quickly that there are several ways to achieve what the team is requesting. The admin needs to be able to connect the tablet to the store’s account so that snack fiends can check in there. The snack fiends need to be able to scan snacks and check in. Allowing just partial permissions for a user on a new device, though, gets interesting.

Reusing a role in a new way can be simpler than creating or changing a role and its permissions, though. It’s always a good idea to get allies early if you’re making changes like this, because a fairly settled product can be tough to alter in this way. The permission system can be too rigid, which means difficult persuading and possibly equally difficult technical work, particularly if your codebase is older and has gotten a little brittle. Possibly worse still is when it’s too easy to change these features, because that can result in bloated, unfocused sets of permissions where the difference between two user types might not matter anymore. Think of it as the chaos katamari. Either way, we need to approach this carefully.

The admin needs to log into the tablet, but we don’t want any other admin features accessible. We think about this and realize a useful mental model is to think of it less like “a known user type does a new thing in a new way” and more like a third-party integration, like when you scope a personal access token on GitHub. Your account has a bunch of permissions, but the token may just have one.
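Here’s a minimal sketch of that scoping idea, with made-up names throughout: the admin account can do many things, but the credential that lives on the tablet carries exactly one scope.

```python
from dataclasses import dataclass

# Everything the admin account can do (hypothetical set).
ADMIN_PERMISSIONS = {"view_customers", "cash_out_points", "link_device", "edit_shop"}

@dataclass(frozen=True)
class DeviceToken:
    shop_id: str
    scopes: frozenset[str]

def issue_tablet_token(shop_id: str) -> DeviceToken:
    """Like scoping a GitHub personal access token: the tablet gets only
    the one permission it needs, not everything the admin can do."""
    return DeviceToken(shop_id=shop_id, scopes=frozenset({"accept_check_ins"}))

def token_allows(token: DeviceToken, operation: str) -> bool:
    return operation in token.scopes
```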

The snack fiend role needs to be approached differently too. We suggest limiting the actions a snack fiend can take on the tablet to updates: adding a snack to the wish list, doing a check-in. No deletions, no account details. And while we’ve allowed long sessions for snack fiends on personal devices, for this we recommend a session lasting either 30 seconds or one operation, whichever comes first.
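That “whichever comes first” rule is easy to express in code. A minimal sketch; the thirty-second window and the allowed operations are stand-ins for whatever we’d actually ship:

```python
import time

ALLOWED_ON_TABLET = {"add_to_wishlist", "check_in"}  # updates only, no deletions

class SnackFiendTabletSession:
    """Valid for 30 seconds or one operation, whichever comes first."""

    def __init__(self, user_id: str):
        self.user_id = user_id
        self.expires_at = time.time() + 30
        self.used = False

    def authorize(self, operation: str) -> bool:
        if self.used or time.time() > self.expires_at:
            return False
        if operation not in ALLOWED_ON_TABLET:
            return False
        self.used = True  # one operation ends the session
        return True
```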

The admin sessions need to change too. We know from product research that we can’t rely on a shop owner being physically in the store every day, which means we can’t expect them to create a new check-in tablet session daily. We compromise and give a try-level recommendation of two-week sessions, offering to discuss it further if they find this is too restrictive for our pilot group. We add a do-level recommendation too: admins need to be able to invalidate their SnackAdder sessions in case someone does a runner with the tablet.

Do snack shops have IT?

From product research, we have a sense of how snack shop owners are used to using SnackTracker in their stores: a counter-mounted tablet next to their point of sale, where they can log user purchases for loyalty points when snack fiends request it. What we don’t know is who maintains that device.

Many small businesses don’t have a dedicated tech or IT person. Instead, in a way you may be very familiar with, it might be delegated to a “tech-savvy” friend or relative, or anyone who’s willing to read instructions and troubleshoot. All this is to say that we can’t expect a ton of available effort to be put into setting this tablet up, and some additional guidance may engender a lot of good will.

“Just spitballing: what if we manufactured the tablets ourselves???”

Product enthusiasm is a gift, and occasionally we need to rein that gift in a little. The team saved this one for last: they’ve been thinking about tie-in potential for creating the check-in SnackAdder devices and selling or leasing them to snack shop owners. On the plus side, they say, this means we’d have total control over the device and what’s on it.

It’s true, and this can be really helpful in a security context. However, it would require hiring for skills we haven’t had to seek before in order to do it safely and effectively, which is pretty concerning. Our initial thought is no, followed by oh wow no, and the emerging do-level recommendation is the kind that I’d escalate within security management to make sure there aren’t any diverging feelings, because a single product security engineer or team saying, “Hello, we’d really prefer you didn’t go into a completely new area of business” is a large statement and one that’s best delivered with backing and solidarity. Fortunately, we find that the powers that be also don’t want to get into proprietary touchscreen interfaces, despite the potential for immersion and branding.

The team takes this well, thank goodness, and instead we suggest that the team consider offering security guidance to shop owners who will be using and maintaining a new device in very new circumstances. Suggesting updates and sending along security advisories could be worth our while, particularly as we expect some secondhand device use in the mix.

Awesome! Job well done, all. Let’s look at how to relay those findings.

Don’t kill the messenger

Graphic of a taco with a cute smiling face

We want to preserve our existing good relationship with these product and engineering allies. Along with delivering a thorough, well-structured report, here are some ways to make that work better.

Clear recommendations

I mean the kind of clarity that it takes a few drafts to get to. We want it concise, easy to understand, and ruthlessly clear. Spend some time on it, and if you feel uncertain about whether you managed it, ask someone else (especially someone like the people you’re trying to reach) to look it over and give an opinion.

Ground recommendations in security principles

This is useful because it means our recommendations are more solid, but it also ensures greater consistency across time and gives teams the chance to learn what backs our suggestions. After a few reviews, some teams may begin to guess what you’ll consider an issue and will fix things before you get a chance to see them.

Share the knowledge

We don’t want exhaustive further reading to be required to understand or implement most of our recommendations, but sharing sources can be really powerful, especially for people who are interested in security. Having useful further reading is one thing that led me into security engineering, and I like leaving that path for others.

Show up

I know, I’m suggesting you go to more meetings, but not all of them. This is a powerful tool if you think your recommendations may not be taken as you mean them, if you’re not confident that your approach is right for the team, or if you’ve gotten any sense of uncertainty from the team. If things feel shaky, don’t rely on third-party interpretations. Your recs may be clear, but sometimes there’s nothing like the clarity of you presenting them yourself and being present to answer questions.

Position yourself as a security ally

It’s useful to remind people periodically that security’s advice is made with the intention of making the product team’s life better and protecting our users too. I like to mention that they’re the ones who will receive the 3 AM phone call that something’s gone terribly wrong. I will be happy to help too, of course, but I may not come online until west coast business hours. Better still is to avoid the crisis altogether.

Seek feedback and mean it

We can’t grow, and we can’t know that our recommendations are doing what we intend, without asking for feedback. If you ask more than once and in different ways—survey, formal feedback request, verbally—people will eventually believe that you really really want to know and will be more likely to tell you. This is a gift; express thanks accordingly.

Takeaways

Graphic of a red and yellow bucket with a cute smiling face with three chicken legs sticking out of the top

  • Push against complacency and cultivate beginner’s mind, even if you keep seeing similar features over and over.
  • Think creatively to explain new things well; I like using metaphors until I can see the person really gets it.
  • Reduce complex technical ideas to key guidelines, with no noise.
  • Encourage ambitious technical work rather than penalizing it with heavier overhead (and make it clear that you share priorities with product and engineering—we all want to make something cool!).
  • Ask how you can do better next time, and mean it!

BSides SF 2023: No Adversaries: Getting Users on Your Side for Tough Transformations

This is the blog post version of my talk with Amy Martin, Project Manager, SF Digital Services, on April 22, 2023, at BSides San Francisco. Video to come. Slides are here.

Every security initiative has two sides. One is purely technical: we need a product that keeps abusive emails out of company inboxes, or a code scanner built into CI/CD that looks for common insecure patterns in new pull requests. All of these, even those that seem like pure automation once they’re set up, also have a second side: the people involved.

We’ve both dealt with the human side of implementing security-related processes. Breanne has grappled with it as a site reliability engineer working to refine IAM roles and as a security engineer creating and refining processes around evaluating and approving Chrome extensions and GitHub Actions. Amy has encountered it while pushing technical advances needed to get ahead of software EOL dates and standardize CMS usage among teams who depended on stability to do their jobs. Both of us have learned the value of creating change through policy and process without making everyone hate you.

To that end, we’re using the term users a little differently than usual. Often, this refers to end users, people using your company’s product, the public, or anyone on the outside who needs what we create. What we’re discussing here certainly affects them too, but our focus here is on internal users: partners within our organization, people who depend on our work to get their work done, and people who need to buy into what we’re doing in order to make a better, more secure experience for them and everyone else. Without them, no security transformation will ever truly work.

Two failures, and hope

In January 2022, Amy left her public library job and joined the Delivery team at San Francisco Digital Services. They needed to move about 80 websites off their Drupal 7 platform and onto SF.gov, the City’s unified and accessible website, in advance of the planned end of life of Drupal 7. That was meant to be in November 2022 and did not happen, but that’s another story.

When Amy started at Digital Services in January, about a third of these departments didn’t know yet that they had to move their website by November, and many of them liked their current, government-speak oriented websites just fine. SF.gov is built for usability and accessibility, and plain language is key—not government speak. Every site they moved required content redesign, and department staff had to be trained to do the work.

How can government Digital Service staff convince civil servants and other agencies to embrace digital transformation in the face of a big unwanted project?

The short answer is that they only did part of that. They moved the websites before their deadline, and they got people started on building user-centered websites and writing in plain language. However, most did not continue that work. Some departments even got rid of the content the team made for them and pasted their old content back in.

Although they met their goal of moving the websites by the deadline, they found that they had fallen short of setting government agencies up for future success with user-centered websites.

Breanne’s story is from a few jobs ago, before security engineering, when she was an SRE with an interest in security. The company saw its first rash of phishing attempts: texts and emails posing as executive staff, asking for sadly normal things like Google Play and other easily resold gift cards, urging the recipients to please move quickly, because the gift cards were needed now now now.

Breanne and a colleague teamed up to write both a presentation and written training to let people know what these attempts looked like and to establish company norms around communication so that it would be clearer to everyone what legitimate company communication looked like.

The company had fewer than 100 people then, so it made more sense to write a single version of the training that would serve both the engineering department and other staff, including sales and marketing. The distinction here has less to do with who is or isn’t “technical” and more to do with whose jobs depend on answering unknown numbers and replying to texts and emails from strangers.

How could two engineers write a workable, concise anti-phishing guide for everyone? How can it go wrong? Not everyone is experienced at writing precise, well-grounded documentation about a complex subject for a varied audience. In this case, they got the job done, but they bumped into some of the limitations of trying to write something about a very specific topic that’s meant to be for everyone.

How can a project’s start doom it?

It’s good to start as you mean to go on, which means that thinking about what makes a project like this successful from the very beginning goes a long way toward ensuring that success. So what gets baked into a project plan that can doom it to failure?

Maybe your planning doesn’t entirely account for reality. Maybe your engagement doesn’t account for the real needs, limitations, and motivations of your users.
Or maybe it’s a meteor from space or other things outside our control. (This probably felt way less possible before March 2020. Now it’s a pretty good idea to write “large-scale crisis” into every risk plan.)

We can’t control everything, but we can control some things. What can we do?

Remember that your partners are users too. Keeping this in mind prevents a lot of issues. Like with end users, it’s worth your time to get to know your internal users’ motivations, limitations, and wants, as well as their hard NOs: things they absolutely will not tolerate.

Seek the similarities. See what behaviors, preferences, and common experiences make groups of users start to cohere in your interactions with them. If your team is having trouble wrapping their heads around this, consider using a lightweight version of personas. Unlike the more involved versions used in marketing and UX, these don’t need to include anyone’s educational level, income, or other demographic information, though it’s not unlikely that some of these divisions will fall neatly between job titles or departments. Instead, look at what’s going to affect your plans:

  • What tools are they familiar with? What tools that you’re asking them to use might they not have encountered before?
  • Do they work with security a little or often? If they hear from security via a direct message or an email, will their first thought be that they’re in trouble?
  • Are they amenable to change or resistant? If they’re resistant, why? (This is a really important question to answer.)
  • What motivates them? What’s their definition of professional success?

Avoid stereotypes or unhelpful guesses. Making them too artificial or overly specific is also more likely to harm than help. If it’s just you and a colleague working on this project, it can be plenty to discuss the groups you’re seeing form and plan based on that. However, if you’re doing a longer project or perhaps a larger body of work, you may wish to write them down to share, shape, and reference.

Encountering resistance? Slow down. When you’re starting to roll out your initiative, you may encounter resistance. If the resistance is starting to threaten the velocity or the success of what you’re trying to do, tap the brake and regroup. Form a new hypothesis or think of another strategy—and then seek feedback from the group that wasn’t feeling served before. This is extremely important to do, because we can’t afford to guess in these situations. Fortunately, when you deputize people, they often want to step up and help. If I think someone will be reluctant, I tell them that they hold a key to my success that I can’t under any circumstances produce myself. It’s important to mean this when you say it, but fortunately, it’s completely true, so that shouldn’t be hard.

Make it easy for people to do what you want. Breanne talked about this in her BSides SF talk on documentation last year. Some of the same tactics apply here too: bribery is a legitimate option for overcoming some objections. Everyone likes stickers and snacks. However, we can do a little better than that too.

Don’t make users wade through extra reading, video, or other hurdles to do the important thing. Make what you need them to do succinct, clear, and as easy to do as possible. Clear instructions help a lot, as do easy contacts to ask for help. Have the critical stuff up front if you’re asking people to take in information. Executive summaries (a block of text or a page or two at the front of a longer document) make sure that busy people have an easier time learning what they need. In longer trainings, the occasional tl;dr block can let people with different attention spans hopscotch through without missing the critical stuff.

This is also a great time to think again of accessibility. (Although it’s always a great time to think about accessibility.) It’s the right thing to do and often legally mandated, but it has one more useful effect, which is that it makes it easier to reach different kinds of learners too. Breanne can hear a video but vastly prefers reading, so a video with transcription reaches those with hearing difficulties as well as those who just like reading, those who can’t turn the sound on, and those who just need to CMD-F or CTRL-F to find the one detail they need to do their job.

This is also a good time for another look back, because history can teach us something. Previous efforts may not be perfect, but they may contain practices and approaches that people are already used to, which means they won’t have to do extra work to learn what you’re doing. It may serve your project well to add additional steps and methods of outreach, but if part of the old approach is getting you a good amount of the way there, it’s likely worth keeping. Security doesn’t have the luxury of perfection a lot of the time, but we can do an 80 percent job in several different ways and achieve something closer to it than we would otherwise.

When you’re asking about the history of projects at your organization, make it clear that you actually want to hear real opinions and that there won’t be retaliation or sore feelings if someone confesses a negative opinion about something that came before.

If all else fails, come back to bribery: security skills look good on resumes, even if the person doesn’t work in security. Try getting people on your side to help them master something important or lucrative. This creates allies.

Okay, but why do we need to think about this at all?

Why do we need to think about this? Aren’t people used to security initiatives and communication around them being subpar or even extremely disruptive? Security is important, and shouldn’t that be enough?

Yes, but—people are part of the system too, and if you’re looking to change the system in an effective and lasting way, you need to work the people part of it too. Both people and the non-people part of systems work better if you give them what they need.

People will not adopt the changes you recommend if you don’t make it easy, explain it well, and make the preferred behavior the easy path. Technical motivations have human implications. It’s just true, and until we reach the robot-controlled singularity, this is just how it’s going to be. We have to adapt.

No matter how brilliant your new security solution, clever process, or other big idea is, it will not work as well as you want it to unless people are on your side. Worse still, you might get a false sense of success as people initially go through the motions of doing what you wanted… only to revert to old habits the next time work gets stressful. We’re looking for something more robust than superficial change.

Once more: how do projects fail?

Projects fail if you don’t get buy-in. This doesn’t need to be unanimous support and a company pep rally celebrating how great your project is. It can be as simple as sending a series of emails ahead of time letting people know what’s coming, when changes will happen, and what they need to do. A little consideration goes a long way.

Projects fail when they’re large and disruptive, even for the benefit of security, and pushed into being without considering how they affect the people who depend on efficiency or otherwise prefer the previous system for good reason. If people are reluctant, we need to know why and to address that.

Possibly worst of all, projects fail when they’re rotted from the inside by contempt for users and other affected parties by those in power. All of your future efforts will be diminished if the people you’re supposed to be serving can tell you think you’re smarter than them, better than them, or a greater authority on their own lives. People hate this and remember it, and you will face increased friction forever if you get up to this kind of thing. So don’t.

How can we fix this?

Unless we’re Ben Affleck and company, we can’t stop a meteor on a collision course with Earth, but there are other levers we can pull to make our projects go smoothly.

Going back to story time: Breanne’s anti-phishing presentation had some requirements to meet. It couldn’t be too long: people are busy. It needed a short, memorable set of guidelines. It needed to relate causes to effects: “If we don’t present a unified front in not answering phishing texts and looking more carefully at emails, we’ll keep getting hit, and that will have an effect on the business.” It needed to be informed by technical needs but without including too much jargon, because most people who needed anti-phishing guidelines didn’t need to know about how spam emails sneak by Gmail’s spam-targeting AI. Technical curiosity is worth indulging, but it was better served by a leave-behind piece for those who want it, leaving the rest of the audience free to go back to their regular work.

Amy’s SF.gov migration is still a work in progress. A lot of their work in 2023 has been around evolving practices and platforms to help their users experience digital transformation. Here are some of their guidelines for that work:

Consider accessibility and cultural differences.
Amy is working on the accessibility of the artifacts she creates and becoming more flexible with the formats and platforms she puts them into. City government is a Microsoft house, and it’s not her team’s preferred set of tools, but many of their users are wary of anything else or simply aren’t allowed to use other products.

Emphasize common ground.
Amy’s background is outside of tech. When she talks with users, she leads with her government background, which allows her to show users their similarities. She makes it a point to visibly not understand things and ask questions, to normalize those behaviors to make others more comfortable.

Say things the easy way.
On SF.gov, they write in plain language, but it can be hard to remember to do that with their users too. If what you need to say is written down, put it into hemingwayapp.com and try to lower the reading level using their suggestions.

Avoid tech jargon and business jargon.
When you need to introduce a technical term, level-set by saying “do you already know the word ____?” Give them a chance to say yes or no, then explain it if they don’t.

Use nonjudgmental language.
Rather than calling something “bad” or another derogatory word, explain the pros and cons. Try to be precise about what you’re pointing out so that you are not expressing judgment. That goes tenfold for anything that your user currently uses or wants.

Speak at a comfortable pace for your audience.
Watch for facial expressions that look worried, like wrinkled foreheads: that could mean people are concentrating hard to keep up with you.

Maintain neutral facial expressions.
Amy does not laugh and does not smile when people ask tech questions. Every question is valid, and every question, especially a very basic or exceptionally strange question, gives you information about what that person needs to know from you.

We sometimes think a smile will be reassuring, but consider that even a reassuring smile can convey “I know the answer and you do not”. As we mentioned earlier, you do not want to give even the appearance of thinking of yourself as smarter than the other person, because that is extremely sensitive and it will be clocked.

Listen actively.
Repeat back what you’re hearing. Paraphrase, but not with tech jargon, because that will sound like you’re correcting them.

Celebrate mastery.
The first time Amy ever felt confident, good, and proud when using a tech tool was in an intermediate Excel class. Amy is not an Excel master, but “confident, good, and proud” is the feeling of mastery. When Amy sees a user experiencing a feeling of mastery, she parties with them, no matter how small a thing it’s for. Amy saw a user feeling mastery because she learned the keyboard shortcut for paste. They celebrated her success as a team, and they’re still a team.

Question your assumptions about your users.
A little humility truly does go a long way. When you don’t understand what your users are doing or why they want what they want, you could try the five whys, because asking why five times can lead you to the real problem.
Example: a tech product that the library bought, failed to adopt, and canceled within a year.

  • Why did we fail to adopt the product? Staff never made it part of their workflow.
  • Why was that? They had trouble training and onboarding everyone.
  • Why was that? Rolling out new software to busy front-line staff is very hard, and they did not set aside enough of everyone’s time to make the training stick.
  • Why was that? The software wasn’t seen as enough of a priority to take people away from public service.
  • And why was that? Well, one person in leadership signed a contract for the software despite every other person saying it wasn’t a good fit for the library.

Answer: lack of buy-in.

Try it out and find a version of it that works for you. Sometimes it’s a good tool to use when you find yourself feeling annoyed by the users you’re meant to be serving.

Ask like a librarian: use reference interviews.
Amy was a librarian for 17 years and wants to teach you about reference interviews. When you go to a library and say “can you help me find a book,” that is the beginning of your reference interview. In this interaction, a trained librarian is assessing:

  • The words you chose
  • Whether there was hesitation in your voice
  • Your body language
  • Tone
  • The way you approached them
  • What else you are holding in your hands 
  • And more

A secret of reference interviews: the cover isn’t blue. Amy has been surprised, in the tech world, by how literally people take requests from users. There’s a librarian adage about “the book with the blue cover.” Asking for things inaccurately is expected behavior. Not because people are lying or not smart, but because this is the way human brains work: imprecisely. 

A good reference interview helps a person form their question. Try tangential lines of questioning. When you’re looking for the book with the blue cover, it’s tempting to keep asking about the cover. Is there a picture? Light blue or dark blue? 

Don’t bother. The cover is irrelevant. 

Ask questions about something else. When did you read the book? In the 70s? Last year? Was it a book you were assigned in school? Did you get it at a Scholastic book fair? Was the main character an animal or a person? Amy uses a mix of broad and specific questions to try to snag on some detail that triggers a new, meaningful memory. The same technique can work for zeroing in on product requirements. 

Learn user vocabulary and motivations.
Every question is valid. A question might sound silly to you, but it is not silly to the person asking. If you are a knowledge authority figure, chances are they had to screw up their courage to ask you.

Keep a neutral facial expression.
We won’t tell you not to smile, but remember that a smile can read as condescending.

Get to know their vocabulary, and use it.
People often adapt their language in ways they assume make sense to us when they ask questions. They use vocabulary they’ve heard people in our position use. They focus on a detail that’s related to what they want because they associate that detail with techy people.

A common example of this in tech is a user who insists on a particular feature, but has an inaccurate understanding of what it does. They know what outcome they want, but ask for a thing that won’t get them that outcome. Why? Because they are trying to communicate the outcome to you in what they assume is your language. Repeating what users said and paraphrasing, in addition to being good active listening, gives them a chance to edit and refine their ask. Listen to the vocabulary your partners use and use it too.

Be wary of contempt.
If you work with anyone who expresses contempt for your user, be very wary of them. All users are beautiful, but some are harder to reach. If you don’t like working with users and find yourself disdainful of their needs, consider that in your career path. You have more than one kind of user and they are all important.

Tie directives to outcomes your users value.
They may not be the same outcomes you value! Find common fears and desires among the users you’re meant to serve and appeal to them, ideally leaning more on the desires than the fears. It’s better to reference the scarier things (“If we don’t complete this initiative, we’ll have our government certification pulled, and we will all be unemployed”) only with people who know you well. Instead, try invoking more immediate stakes. Breanne likes “I don’t want you to get a phone call at 3 AM if something goes wrong,” because everyone understands that.

Takeaways

  • Work to understand actual motivations behind resistance.
  • Learn to speak the user’s language: vocabulary, accessibility, and technological knowledge.
  • No dictates without relating them to your user’s actual needs and priorities.
  • Try reference interview secrets to understand the bigger picture.
  • Helping people to master something important/lucrative will get them on your side.

Further reading

Diana Initiative 2022: How to Become a Security Partner (and Why You Should)

Hey, Diana Initiative attendees and other interested parties! This is the blog post component of a talk I did on 11 August 2022 at the Diana Initiative in Las Vegas.

I wrote in greater detail about my specific security partner job as part of the product security team at Gusto here.

I compiled some security partner job descriptions here. As I am still getting LinkedIn updates on the search for “security partner,” I may add more to this doc in the future, though that will be dependent on LinkedIn sending me an actual security partner job someday, which has yet to happen.

There are chapters in Reinventing Cybersecurity from two security partners. It’s a free download, and I’m really proud of both my contribution to this book and the project as a whole. You can also order it for an amazingly low price from Amazon.

My BSides SF talk video is here, and a written version is here. This is not a talk specifically about security partnership, but what I talk about in it comes up in my job every day.

Netflix has written about security partner work, and both part one and part two are well worth reading.

And if you have further questions about security partner roles or anything else I covered in this presentation (slides here), come say hello to me on Twitter, and I’ll answer all the questions I can.

BSides SF 2022: Read the Fantastic Manual

This is the blog version of my BSides SF 2022 talk, “Read the Fantastic Manual: Writing Security Docs People Will Actually Read”. I’ll embed video here when it’s available. If you came here from my conference talk: hello! I’m glad you’re here.

Let’s talk documents. Every company needs them, and security in particular needs to be able to effectively educate at scale. We’re going to look at the ins and outs of creating (or reviving) a document repository that explains security needs to the rest of your company, though most of what’s discussed here is useful for anyone who needs to relay information to anyone else. What I’m describing is best for companies of up to perhaps 4,000 people and not so much your very large companies, which often have comms teams for this kind of work. The strategies here help people doing that work too, but as those companies can often afford a content strategist or two, it’s sometimes less critical for a security engineer to put the time into effective doc creation.

How do we figure out how to write?

Before we throw a bunch of work into this, we need a strategy. We need to work to understand what needs to be written and what needs we’re looking to meet with our documentation.

Learn from the present

The people we’re hoping to serve with documentation can give us a good idea of where to start. If it’s safe for you to be back in the office, this is a great time to eavesdrop. If you’re still remote, it’s time to read through the chat and probably a lot of it. If you don’t have a channel called something like #ask-security, a place where it’s clear people can ask questions about security and get answers, it’s time to make one. Not only does this tell people that you want to answer their questions, it also gives you a primary place to see what concepts and processes people are struggling with. What questions come up over and over? What security-adjacent work is making development proceed slowly or unsafely? See what people ask, and you’ll get your first ideas of what documents could benefit the people around you.

We also need to get a sense of what’s going on with the existing docs. In addition to understanding what’s already out there, check the views, opens, or whatever metric your document repository of choice offers. It can be gutting to see that some important doc has been opened all of 13 times (particularly if you wrote it!), but it’s information that we need. We also need to ask people what they think of the existing docs: are they written the right way? Do they cover the right material? What gaps are there? Getting honest feedback can be tricky in some work cultures, but we’ll talk about ways to work with that later.

Learn from the past

We can learn a lot about what to do and what to avoid by talking to the person responsible for existing documentation (though that might, of course, be you). If that person is no longer at the company, try to find someone who knows why that person is no longer there. If there are predictable frustrations, find out about them and figure out some ways to prevent them from affecting your work.

Go back through chat histories, old all-hands decks, or other places the past lives and figure out what docs people have historically shared. What’s commonly passed around, and what does it seem like people are unaware of? If something important isn’t mentioned or looked at much, why do you think people might be unaware it exists? Keep at this until you have a good sense of the behaviors of olde and how well they’ve worked.

Pace yourself

Documentation is a marathon, but there are things to keep in mind that can keep you going. For instance, one perk of a nonexistent set of documents (or docs that haven’t gotten the attention they need for a while) is that people will notice when you put some effort into it. A little love applied to something that’s been known to be subpar for a while will be appreciated.

Also reassuring? Perfection is impossible in this work. No, I swear that’s actually awesome. We’re looking for good enough, better, functional, and improved – not perfect. This frees us to aim to do good work without smothering it with an impossible quest.

And while you’re diving into this work and figuring out where your efforts are most needed, try to work on just a couple of questions at a time. We can’t boil nor drink the ocean, and we certainly can’t address all of an engineering org’s or company’s security questions at once. Looking at just a couple questions at a time – questions like “How does this company want to deal with phishing threats?” and “What are some simple ways to boost broad security awareness?” – keeps you from being overburdened and gives you the chance to find interesting connections that being single-minded about this work can obscure.

In short: how do we know what to write?

  • Listen to conversations and read the chat to see what people know and need to know
  • Check stats on existing documents so you don’t duplicate work
  • Ask around to see what’s working with the existing documentation
  • Work to answer two questions at a time as you proceed through this work

What makes a good document?

Bought a triple cat bowl holder so when they're eating the mogs can line up perfe ,,,
Oh never mind.
(Photo of a wooden frame holding three bowls of kibble, followed by a photo of three cats eating from it, all overlapping each other instead of lined up one in front of each bowl.)

It’s a good document when it enables people to carry out the behavior you want. It’s not uncommon for docs to be like this well-intended and very cute cat feeding trough: you thought you were making one thing, but your users thought you made something else. 

How do we build a better document?

First, we need to be thorough. There’s a tendency in some tech documentation to make a lot of unquestioned assumptions about the reader, leaving out vital details, specifics the writer deemed unnecessary, or other key pieces of information that can derail the reader as they go read something else to try to fill in the gaps.

Instead, add whatever you know about the subject. If you know specific commands, file locations, steps, links to background information, or anything else that can help the reader accomplish their goal, just include it. We want this to be a complete resource for finishing the task the doc is meant to illuminate. What if your entire team got food poisoning or, you know, Covid? We’re writing for the survivor. We’re writing to help Sigourney Weaver get to safety in Alien. Do it for Ripley.

Docs should also serve people having a bad day, which is often the case when they’re trying to figure out something difficult that stands between them and finishing their work. Aim to write a friendly document – or, at the very least, not an actively adversarial one. We want the user to believe – to know – that the doc’s author is on their side.

To that end, the doc should speak the user’s language, and it should not use a different vocabulary than the intended reader. It’s fine and normal to introduce potentially new terms, but we need to explain them. This can mean tooltips, explaining acronyms before relying solely on abbreviations, and linking to explanations.

To best accomplish this, we need to have a specific audience in mind. When you’re writing a doc, complete this sentence before you start writing:

I am writing this doc for $audience to describe $information for $purpose.

Some examples:

I am writing this doc for front-end engineers to describe our preferred form libraries for XSS prevention.

I am writing this doc for everyone to describe common phishing techniques for thwarting recent spear phishing attacks.

I am writing this doc for that executive to describe why we need this tool for upcoming budget planning.

(We all know that executive, don’t we?)

Each of these gives you a sense of the reader’s context, what the content will be, and a definition of success or failure. Once you know this (and I suggest pasting your completed sentence at the top of your nascent doc for guidance as you start to shape it), you’re ready to write. Once you’re finished, ask someone who’s a member of your stated audience to review it to make sure it does what you think it should.

How do we spot a good document?

  • It includes everything we know that a reader might need to know about the subject or process
  • It’s friendly and speaks the reader’s language
  • It has an intended audience, information to be conveyed, and a purpose for existing

How should we write it?

Alas, for you, I have the perpetual security answer:

A vector graphic of a Magic 8-Ball with a blue triangle reading "IT DEPENDS"

There are so many different kinds of documents we may be called to write. Here are a few of them:

  • Process descriptions
  • Reference
  • Architectural decision records
  • Where the bodies are hidden
  • How to recover from an emergency
  • What teams do
  • How to reach people
  • Lunch menus
  • On-call schedules
  • etc. etc. etc. etc.

Your purpose determines what you need to write.

Let’s look more closely at a few common possibilities.

Playbooks

You had a thing happen; here’s what to do. An example: you need to record a vulnerability. You need to get PII out of the logs. You need to create a new shard for your database.

These tend to be either frequently referred to or rarely looked at but still extremely critical. List out steps, be thorough, and expect to refine it a few times, especially if you’re writing for people who are just learning this stuff.

Other process

This can describe what a team does, how to do something essential but less pressing than what’s described in a playbook, or any other how-to.

This also includes process flowcharts, which I will confess I often hate. It’s not the flowcharts’ fault; it’s that I’ve most often had them lobbed at me by teams with a stronger interest in getting me to go away than in helping anyone. If it’s a team’s first offering when you ask for more information, I become immediately leery. They aren’t as accessible as people think, especially if you’re throwing out a rat’s nest of overlapping swim lanes.

A very involved flow chart with a ton of text, including groupings that say "ANSF TACTICAL," "POPULATION CONDITIONS AND BELIEFS," and "POPULAR SUPPORT." It is too tangled to get any real information from. You are not missing anything here.

I googled “terrible flow chart,” and the internet delivered. It’s funny, but it’s also not far off from experiences I’ve had with these from unhelpful teams. We can do better.

If you must use one of these – and yes, there are legitimate reasons for it, though fewer than some people think – ask someone outside of your team to review it before you throw it at someone who depends on it to get their work done. If you’ve only run it by people on your team, there will be opaque parts that need to be fixed before you call it done.

A grab bag of writing advice

Keep it friendly; informal is probably best, because security can be scary.

Remember, your doc may be someone’s best hope on a really bad day of work. We want to be friendly, and being informal where you can pays dividends. Remember too that lots of people have had bad experiences with people on security teams. Getting to be the friendly helper on a bad day is an incredible opportunity to make the future different from the past.

Explain all abbreviations and acronyms.

Tech runs on acronyms, but we need to anchor them. When you first use an acronym, abbreviation, or other specialized vocabulary, explain it in some way. Write out what the abbreviation represents, and provide a resource to explain other terms the reader may not know yet.

Link generously, both internally and externally.

Linking to other internal docs is a great way to both show the user what other resources are available and make what they need available without them searching for it. Linking externally, to the resources you inevitably come across and rely on when creating a doc, gives the user who wants to learn more resources you know are good without making them search and vet on their own. You already did the work; let someone else benefit.

Make it clear what person is behind a doc; someone will need to know.

Sometimes, documents can seem to exist in a void, but a human always started it. Make it clear what person or team wrote the thing the user is reading. It humanizes things, and it makes it easier for them to reach out to ask for more, request changes, or just say thank you.

Chunk information by complexity with headers to enable skimming.

People read in lots of different ways, and we can support them by sprinkling a little information architecture on. Use headers liberally, bold critical information, and enable skimming wherever you can. Because we can…

Never assume anyone will read the whole doc.

I know, I know. I tend to read entire pieces of documentation. Maybe you do too. Most people, however, do not. Explain things concisely; some people may read just the paragraph they need, and that’s totally ok. In fact…

A doc must work in isolation from any supporting material.

We sometimes need to write lovingly crafted suites of documentation. I love that for us. However, we can’t assume anyone will actually consume all of it. Instead, if you’re detailing a process, assume someone will never follow the internal links you crafted or follow the divine path of information you’ve laid for them. Never make it necessary to read across three or four docs to get someone through a problem if you can help it.

If in doubt: gifs and animal pictures keep people’s attention.

A grumpy tortie cat sitting on carpet, her front legs crossed, looking at the camera as if she thinks you've done something really dumb


It works for me, anyway. If you need to hold people’s attention through something dry, throw in fun things. Surprise can keep people engaged for longer.

Include how to give feedback in each doc: survey, chat, comments, etc.

If people want to ask for more, offer a correction, or compliment you, we want them to be able to find us. Have something at the bottom that tells them where to provide feedback. This is, you see, a gift.

How do we write our docs?

  • Know what kind of doc you’re writing.
  • Remember that your job is to explain AND be approachable.
  • Act like each doc must exist in isolation.
  • Make it skim-friendly for people in a hurry.

Where should our completed docs live?

First, we have to find all the options. There seem to always be at least two official doc repositories; usually one is Google Drive, but not always. It’s our job to figure out which one to use. Both may be in play (or more!), but usually only one is the right answer.

We also have to keep an eye on chat and what gets said there. Chat is not documentation, but it is an information repository. Watch how links are shared, what information is posted, and how people react to it.

When you choose your one true doc repository, figure out how it works. For instance, how are docs marked as drafts vs. completed?

We can learn a lot of this through asking questions and observing. We can also learn by choosing incorrectly and getting corrected (and probably will). This is normal and not a problem, so long as we absorb the information given to us.

We also have to understand how people look for information. Do they hoard and share links, do they roll the dice with search, or do they graze until they find what they need?

As we figure this out, it’s important to remember that there is no perfect solution here. No, seriously, this is actually great. Documentation can be more or less effective, but it’s unavoidably a subjective exercise. We can aim for better, but we’ll never achieve perfection. And if you’re tempted to suggest importing your immaculate documentation storage solution from your previous job, thinking that will surely solve all the problems you’re facing, I invite you to remember the joke about how you now have 24 JavaScript frameworks.

Instead, we’re looking for more like a 75 percent solution to our problem. You know how Cs get degrees? This is that situation, except our C is consistency. Here’s a secret: mostly it doesn’t matter where our documents live, so long as people know where to look. Whatever you do, just stick with it until you have a good reason to change course. Then iterate and communicate what you’re doing to the people who depend on you. No, really, that’s it. Yes, there are some really painful places to put documentation out there, but if we’re dealing with those, we’re probably fine so long as we stay the course.

How do we figure out where to put the docs?

  • Figure out all the places docs can live
  • Figure out where people most often search
  • Figure out how they search
  • Match it when you can, especially at first
  • Stay the course and iterate deliberately and clearly

Circulating our work in the company

I know some people hide in documentation to avoid humans, but I regret to inform you that we need to talk to people so they know our work exists. I promise it can be rewarding, though. Here’s how.

Chat, especially in this more remote era, is one of our best tools. As before, we want to watch for opportunities to provide links to resources we’ve written at just the right time. Now, we’re looking for when questions come up, even if they’re not directly phrased as questions. We also want to make announcements when we’ve created something new.

Meetings, though painful in too great a number, remain a great tool for socializing our work. I like team meetings for this, especially if I’ve written something because of a need surfaced by a particular team. It’s genuinely fun when you know someone’s struggled with something and their life will become easier because of your work.

All-hands and other larger, higher-level meetings are great for this too. Less targeted, they make up for the broader aim with sheer numbers. Better still is if you have a regular segment about new documentation so that people know to expect these things monthly or quarterly.

A word on bribes and incentives

It can be enough to make written resources people need, in a way that works for them, provided in the right medium for the way they glean new information. That’s great. However, if we want to influence behavior, bribes and incentives are very useful.

Stickers

Stickers are a classic for a reason. Why do we tech types go apeshit for these? I don’t fully know. Not everyone who loves them was a Lisa Frank freak as a kid like I was, so I can’t account for it, but it works very well, especially if you do limited-edition ones for an event or other achievement. They’re inexpensive but fun, and people will do things you want of them if you provide stickers. I’ve gotten them from Gibson Print and Sticker Giant. My current company gets nice ones from Printfection.

Badges

People like flair. Give them flair. You may need to make friends with your company’s design department, but I swear this is actually a fun errand.

Rewarding ambassadors

There are lots of ways to do this. Having an organized kudos program where people nominate each other for doing good security work is one way; complimenting someone to their manager is another. Call out good work when you see it.

Coffee

Most offices have free coffee, and a lot of them (especially in the Bay Area) have legitimately good coffee. Despite all this, the $5 Starbucks card still feels special. Maybe it’s the permission to go get something covered in whip, I don’t know. But it works.

The common thread here is that effective incentives don’t need to be expensive. These are not bonuses; those are done by another team and likely above your paygrade. Instead, the goal of this kind of treat is to make a compliment more tangible. We want people to know we see and appreciate their efforts, which makes them more likely to do it the next time they get the opportunity.

Get comfortable repeating yourself

We rarely learn something on the first encounter with it. Instead, the journey to full documentation awareness for any given user is more likely chat + seeing it in the docs repo + a mention in a meeting + a link provided by a teammate at just the right time. This is to be expected; it’s normal and fine.

If you feel uncomfortable repeating yourself (as most non-tedious people do), make a calendar for when you’ll announce things in meetings, chat, and other places. This way, when you feel like you’ve said it too many times, you can look at your schedule and know exactly how many times you’ve said it. This also makes (yes) iterating easier, because you can see at a glance what you’ve done and decide whether more, less, or different would be a good idea.

How do we get our work out there?

  • Promote where the people are: chat, meetings, all-hands.
  • You are going to have to repeat yourself.
  • You are going to have to repeat yourself.
  • Bribe people toward the behavior you want.

How do we know if our efforts are working?

We have to evaluate what we’ve done.

We can ask people via regular check-ins and one-on-ones, especially the people who gave us pointers on where to put our efforts. We can do surveys, though some people have had bad experiences with them; if you keep them short, to the point, and connected to a personal appeal, you’ll likely get some good information. We can look at metrics, going back to the views and opens we discussed earlier.

We can also find a goal person. This can be a persona, but if you can, try to find a real person whose needs you’re looking to meet. Once you’ve made the resources they need, you can find another goal person and start working to make other people’s lives easier.

We also need to review our body of work regularly. I like quarterly for this, but if that sounds punishing, biannual can work too, so long as you don’t skip it. We need to know what we’ve published, how it’s being used, and what it tells us about the future of our work.

It’s also a great time to review feedback, which ideally we’ve been collecting all along via our feedback call to action in every doc we’ve published. If you haven’t done that already, this is a great time to do it.

A note on soliciting feedback

If we are very lucky, we work at a company that prioritizes psychological safety. However, if you don’t, you may need to do some extra work to get feedback from people. At psychologically unsafe companies, what you know to be a friendly request for an opinion can sound like a trap to people who’ve gotten used to living in these cultures of no. Honest feedback depends on people feeling safe enough to be honest. Alas, if you’re in charge of security documentation, you probably don’t have the stature in the company to make a culture-wide change of this kind.

However, you can make a psychological safety bubble around you and your team. Like the rest of this work, it takes steady effort across time, but it can be done. When you ask for feedback, make it clear that you really want to hear the tough stuff, because you really want to improve things. Emphasize that you’re asking for opinions on a document, not on you or your team. And, when you hear difficult things, take that feedback and use it.

Across time, people will begin to trust you (if they didn’t at first) and be more immediately forthcoming; seeing their needs addressed warms people up, even if they’ve had reason to be withdrawn.

This can be tough to do, especially if you’re not used to it, but this work can’t succeed without honest reactions and opinions.

What next?

We get to start the next cycle. Ask around, do your research, determine new needs, and keep mapping the future.

How do we measure success?

  • Do surveys, ask people
  • Review regularly
  • Create a psychological safety bubble if wider culture doesn’t support it
  • Start the next cycle!

Keep with this cycle of listen, learn, implement, and iterate, and across time even the worst problems of documentation will begin to lessen. As documentarians, it’s our job to serve our colleagues. It’s tough work, but it scales better than almost anything we can do, and it’s available even when we’re not there to answer questions. I find it to always be worth the work, and I hope you think so too.

Hello, AWP 2022!

If you’re here, it might be because we talked and I gave you one of my shiny business cards. Here are some useful resources for knowing more about what I’m doing.

  • The posts on this blog are typically about my work as an SRE and then as a security engineer. Read on if you want, but there’s no fiction here (except the one about computers being a good idea; I assure you they are not).
  • I write about travel at Deviation Obligatoire.
  • My monthly writing newsletter is Optional But Encouraged. You can read my previous dispatches here.
  • I mostly write novels these days, which are in various phases of revision. I’m planning to query later this year. I write what I consider regular fiction (meaning no magic or spaceships) but also a fair amount of fantasy (yes magic) and sci-fi (yes spaceships).
  • I’ve written zines for a lot of years too. You can get glimpses of them here.
  • I’m on Twitter more than is ideal for, well, anyone, but there it is.
  • I ate my first cheesesteak this week, which represents more meat than I’ve probably eaten in the previous seven or eight years. Out of character for me, but it felt stupid to come all the way here and not eat the food of the people. I paired it with a Yuengling and a friend’s good company and felt fairly content.

Anyway, hi! I probably enjoyed talking to you. You know how to reach me. I hope AWP is treating you well (and me too, for that matter).

PancakesCon 3: CLI Server Surveillance and Writing Your First Novel

This is the blog component of my PancakesCon 3 talk on 16 January 2022. Video to come.

My monthly writing newsletter lives here.

Part one: CLI Server Surveillance

This part of the talk concentrates on CLI tools that enable OSINT: Open Source INTelligence gathering. This is when you use publicly available information to learn more about a site, company, network, or other entity with the goal of getting a more complete picture than the individually released pieces of information were probably intended to create. The commands and how I’ll suggest using them are all legal to use, but I’ve noted certain applications here and there that cross that line. If in doubt, remember that typically one person using one command at a time is fine, but scripting something that hammers a target, particularly without permission, is a bad idea for lots of reasons.

I primarily work on a Mac, so the commands below are ones I’ve tested there. Usually (but not always!) the commands and flags are the same on Linux. Try them out and learn some stuff about your system. If all else fails, add the -h flag to the command. You’ll be fine. I’ve noted Windows equivalents wherever possible, but the specific flags and placement of parameters will be yours to discover.

You have to be specific to be terrific, so let’s establish what we want to learn about a website or server with these tools. We’ll be looking to answer these:

  • What’s this server doing?
  • What’s this server running?
  • What ports are open?
  • Who owns this server? 
  • Who is the domain registered with?
  • What nameservers hold the records for this domain?

Why do we want to know these things about a server? I’ve most often done this work in the early stages of researching a target for my job – with the specific, written permission of the owners of the servers. In my last job, I sometimes did timeboxed pen testing for potential vendors, and I would start out in a big-picture way: what can I figure out about how this site runs from the outside? Without setting off alarms for the folks who own the servers and without being disruptive or possibly breaking the law, what can I learn?

The tools I’m about to describe are either available on a fresh OS install or are so standard that the steps to install are well documented. I had to send a few texts to my boyfriend asking him to run commands on his normal-person Mac, as I couldn’t remember if I’d installed some of these myself or if they came standard. Since this is the blog post version of the talk, I’ve linked installation docs where needed.

The tools

ping

ping answers questions like whether the server is up and what the server’s IP is, based on providing the domain. It uses ICMP – internet control message protocol. It’s a protocol like HTTP, HTTPS, and FTP (plus a bunch of others), although it’s part of the internet layer – those other three are application layer.

Traceroute, described below, uses ICMP too; it’s a common protocol for diagnostic stuff. Ping is an IPv4 tool, though there’s an IPv6 version too: ping6. It’s used for diagnostics because you get an echo reply if the host you’re pointing it toward is there. Implementations of ping vary a little: payload size, TTL, time to wait for a response. The echo reply is sometimes called “pong.” :)

My most common use of ping is when I realize my internet connection is being a little funky, and I want a quick sense of how broken stuff might be. It’s a nice quick diagnostic tool when you’re trying to figure out where to start with troubleshooting. Is it zero percent packet loss or not? Beyond that, it’s one way to get an IP from a URL or quietly check if a server is live. It’s also possible to change the origin of the ping (though that rules out getting a response, of course).

Ping is a lovely, ordinary tool that’s been put to a couple of good, weird uses (which you should not try at home). A ping flood is an avalanche of ping requests that don’t wait for a response, with the goal of overwhelming the target and creating a denial of service. Another nefarious use of innocent old ping is the (wait for it) PING OF DEATH. Instead of a ton of individual ping requests, ONE GIANT PING would be sent, because older systems would crash (or experience buffer overflow) if you sent a packet bigger than 65,535 bytes (the maximum size of a correctly formed IPv4 packet). This is mostly a thing of the past – firewalls and other mitigations make these generally a nonissue now. However, it was exploited in 2013 against a version of Windows! It’s patched now, but 2013!

Some sample commands

ping $url or $ip

ping -a $url or $ip (dings with each response, because you need that)

Stop after five pings: ping -c 5 $url

Intervals between: ping -i $seconds $url

It’s the same on Windows: cmd > ping.

curl

curl – short for “client URL” – debuted in 1996, so it’s a more recent arrival. (Ping was born in December 1983.) It can answer questions like:

  • What’s this server serving?
  • What server software is it running, and what version?
  • What HTTP response code does it return to various requests?

I reach for it when I either want to see something from a server but not in a browser, or I want to see information from the server that typically doesn’t show in a browser.

Some sample commands

Plain old curl $url if I’m curious what’s behind phishing URLs in texts

curl -I $url for headers (case sensitive!!!), which is the same as curl --head $url

Add -v for a little more detail about certs, TLS handshakes, IP addresses, ports, and more nuts and bolts of what’s going on

And yes, it works in Windows too: cmd > curl

When I read through the headers, I’m looking for indications of server software, and most especially for outdated versions of that server software.
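If you just want that one detail, a quick filter works (example.com is a placeholder):

curl -sI https://example.com | grep -i '^server:'

The -s flag silences the progress meter, so all you get back is the Server header line – the software and, if the admins haven’t muffled it, the version.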

nmap 

nmap is short for network mapper. You might have to install this one, but the instructions are clear, or: brew install nmap. nmap can answer questions like:

  • What ports are open that I could possibly connect to?
  • What software is running? (With a little extra research)

nmap does lots of different kinds of scans. You can go deep on this one. It can also be very noisy if you’re trying to scan things quietly (but legally!). Hitting all the ports can draw attention you don’t want, so be strategic. It uses ping but can also use TCP, UDP, and other options as you do your recon.

nmap is one of those tools where you can use five percent of it and still do a ton. There are amazing depths to be found – it rewards putting time into it. We’re mostly focusing on port scanning here, though, since we’re looking at other people’s servers, not investigating networks we’re already in.

Some sample commands

nmap -p# $url or $ip (can be -p22 to see if SSH is available, or -p0-1023 for reserved ports, or a different selection)

nmap -F $url or $ip for faster, fewer ports than default scan

And yes, it’s available on Windows too.

nmap has a ton of resources available, including a guide to port scanning basics and one of many, many broader guides.

netcat

netcat is another networking tool. For our purposes, it can answer questions like:

  • Is this port open?
  • How do I write a GET or other request?

netcat is generally good for networking things; it can be a quick way to see if a port is open. Beyond that, it’s a neat tool to put you right in the middle of how requests work when you send them to a server. Connect to a server and then send a GET request for /index.html. You can get some visibility into networks with it, and also it’s just fun to mess with. What can you GET?

Some sample commands

netcat -vz $url $port to see if the port is open (just a little quicker than nmap sometimes)

-z for zero I/O (so just scanning)

-v or -vv for verbosity, of course

-T for telnet!

-t for TCP mode

netcat -vt google.com 80 (verbose; tcp) and then GET /index.html

Try ncat for Windows.

traceroute (or tracert on Windows)

traceroute can answer questions like:

  • What equipment sits between me and this server?
  • Who hosts this site?
  • What does a request’s journey across the internet look like?

If you’re also a networking nerd, traceroute is really interesting. It’s another diagnostic program in the tradition of ping. It shows possible routes between your computer and the server you’re trying to reach, plus transit delays across a network. Ping only does round-trip times with success or failure; traceroute shows you everything in between. It’s a fun way to get to know some networking stuff, like the private IPs for your home devices, what it looks like when your request is making its way out of your ISP’s domain and when it hits the open internet, and when the request resolves. 

The route is probably going to be at least a little different every time, so run it multiple times to see a little more of the world. You’ll probably get some output that just has asterisks. This can mean timeouts, firewalls that prevent data being sent back to you, or equipment configured to reject ICMP packets (which are sometimes not prioritized).

Generally I’ve used this without flags and just watched the output. It was a favorite tool in the networking class I took at the Bradfield School of Computer Science. The various flags let you control TCP vs. ICMP for your requests, the number of probe packets per hop (defaulting to three), wait times, ports, TTLs, and stuff like that. 

Some sample commands

traceroute $url or $ip gets you a lot of what you’ll want here. Tinker with flags if you’re curious, though; you’ll learn something cool (even if it’s “don’t do that again.”)

It defaults to IPv4, but you can specify either with the -4 or -6 flags.

whois

whois can answer questions like:

  • What are the (possible) nameservers for this domain?
  • Who owns this block of IPs (maybe)?
  • Who’s the (likely) registrar for this domain?

Note the qualifiers. I’ll explain that in a minute.

whois works with both URLs and IPs as parameters. IPs are extra fun, because you might get details about a CIDR range, which is a block of IPs designated using a 0.0.0.0/0 syntax. Check out cidr.xyz to learn more about what that means.  

For URLs, you get nameserver details and other stuff IANA (the Internet Assigned Numbers Authority) stores. It can begin to tell you who owns a domain or IP, where their information is stored, where the site is hosted, where the domains are registered, and other technical details. There may be contact information in here, but most often it’s business details these days.

whois gets its information from a whois server – think a database of “we expect things to look like this,” which is (as we know) not the same as “things look like this in practice.” Sometimes it’s useful to see what things were once like or were expected to be like. In OSINT, additional information, even if it’s no longer accurate, can still provide helpful detail.

Some sample commands

whois $url

whois 8.8.8.8 (or any IP)

The whois command is the same on Windows.

dig

dig is short for domain information groper. It can answer questions like:

  • What’s lurking in the DNS records for this domain?
  • What do the domain’s actual nameservers have to say about all this right now?

dig provides DNS records for a domain, which can include some fun, weird stuff, especially in the TXT records. Information comes from the domain’s nameservers. Since it uses DNS information, it pulls from the same information used to resolve a domain typed into a browser’s address bar. DNS is the authoritative source here. If in doubt, go with dig.

Some sample commands

dig $url

dig +trace $url to see how it does the search 

dig google.com TXT (or MX or CNAME or any other type of record) to poke around some more. TXT records are my favorite because they’ve been used for so many weird things: SPF (Sender Policy Framework) records to reduce spam and fraud via email, but also verification strings for marketing software and other services that confirm domain ownership that way.

If there are many IPs that resolve for a domain, it’ll show you four.

dig is available on Windows if you install BIND. You can also try nslookup.

host does something similar but only returns a little information: IP if you provide the domain, domain name pointer if you give it an IP. If you’re using it in a script and want to just use a little bit of the output, maybe processing with awk, this can be simpler and more predictable to process.
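If that’s your situation, here’s a tiny sketch (example.com stands in for your target):

host example.com | awk '/has address/ {print $NF}'

host answers with lines like “example.com has address 93.184.216.34,” so this prints just the IPv4 addresses – a predictable little building block for a bigger script.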

A few other useful tools for your OSINT 

Wappalyzer offers Chrome and Firefox extensions that tell you the tech stack of sites you visit.

Company press releases, articles about the company, and interviews with employees can give you insight too.

Look the company up on GitHub and see what’s public.

Check LinkedIn to get an idea of staffing (especially in the security teams).

Never underestimate viewing the page source (use this at your own risk if you live in Missouri, though).

And clever people have written other libraries and projects that bundle these tools, like this osint Python project.

These are fun and useful ways to learn about tools and computers, get a sense of how networks work, and learn things about a target. Individually, they’re powerful and useful. If you combine these things into scripts that give pretty output that matches your goal and how your brain works, you can do some amazing stuff.
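To make that concrete, here’s a minimal sketch of that kind of wrapper script – the name and output choices are mine, not any standard, so season to taste:

#!/usr/bin/env sh
# quickrecon.sh – point a few of the tools above at one domain
# usage: ./quickrecon.sh example.com
domain="$1"
if [ -z "$domain" ]; then
  echo "usage: $0 domain" >&2
  exit 1
fi

echo "== headers =="
curl -sI "https://$domain"

echo "== nameservers =="
dig +short NS "$domain"

echo "== registrar details =="
whois "$domain" | grep -i registrar

echo "== quick port scan =="
nmap -F "$domain"

Remember the earlier caveat: the idea is one polite run against a target you have permission to examine, not a hammer.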

Right! Onto part two of this talk.

Writing your first novel

Our goals here are to learn:

  • How to complete a first draft without feeling terrible all the time
  • What a first draft is and our working definition of a novel

A first draft is the first attempt at telling a long-form story. It is not a final draft; it’s material to be used when you revise later. Your early goals are to get from A to B, figure out some of who your characters are, get a sense of what’s important to you in this story, and give yourself something to work with in revisions. Nothing more.

Most likely, though, you’ll manage more than that as you go along. You might find that you’ve created whole people, for instance. 

When I say “a novel,” I’m referring to a work of fiction that’s likely between 70k and 100k words (though epic fantasy can go longer). It can (and likely will, at some point) incorporate something from your life, if only feelings, but overall, it’s full of stuff you’ve made up.

The great thing about a first draft is that it is, by definition, not a final draft. When the first draft is done, the arrangement of all the islands in your little imagination archipelago is probably not final, but they’re there, and you’ve written the whole thing down, for your definition of “the whole thing.”

Revisions come later and are their own very large subject. Today, I’m trying to get you from “I’ve written zero novels” to “I feel like I could totally write a first-draft novel.”

Because I bet you can if you want to.

Why write a novel at all?

For some of us, it’s really fun! Maybe you have a character/scene/place in your head that you want to put into words, maybe in support of a goal. Or perhaps you have an experience you want to write about but want to fictionalize it a little or otherwise just don’t want to write a memoir.

Or maybe you’re me, and harmless escapism is really useful sometimes.

What you need to get started

You need to want to do this. This sounds obvious, but a lot of writing advice can get unhelpfully abstract about this point in the process. But seriously, the single most important thing (and the same advice I give people considering getting into tech) is that you have to like this. You can figure out the rest later, but if you like the idea of having written a novel better than the reality, you’re going to have a much harder time.

You need a seed of an idea. A scene is enough. A character you want to follow around and create adventures for is enough. You just need a place to start tugging the thread and trust that the rest can follow. 

How should you write, practically speaking?

Any of these work:

  • Longhand, if you want
  • Google Drive (my choice)
  • A notes app on your phone
  • Git
  • Microsoft Word (seriously; it’s pretty standard among professional writing/publishing types too)
  • Scrivener

Anything that fits how your brain works and lets you sit and write with minimal friction is the right tool. It should mostly invisibly support how you do things.

A little more on first drafts

I know, I know, but the thing is that a lot of lofty “how to write a novel” advice doesn’t emphasize the special and rather wonderful qualities of writing a first draft.

A first draft is:

  • A way of getting from point A to point B in one of many possible ways
  • A great chance to delight yourself
  • A place to push boundaries of the story, because logic and characterization aren’t set yet
  • Probably going to have repetition in it
  • An inelegant first route that can still be super fun to write

A first draft is not:

  • A chance at perfection (that comes maybe later but probably never)
  • Your only chance
  • Required to contain everything that needs to be there (I just changed the name of a major character almost two years after she first entered my head because I realized something didn’t work)
  • Required to be written linearly – it’s totally ok to bounce around, leave placeholders, write yourself notes, etc. I sometimes put BREANNE in places so that I can cmd-F and find the things my past self wanted me to put some time into. 

A first draft is woven cloth, not a finished dress. It’s clay, not a sculpture. Go wild.

Planning

Writers and their ways of planning fall across a spectrum. At one end, we have plotters; at the other, pantsers.

Plotters like to know everywhere the story is going before they start (still usually leaving room for change along the way). This is perfectly valid; lots of plotters start with an outline and then write.

Pantsers write by the seat of their pants. They know some things when they start; they know more when they’re done.

Some people (including me) are in between. I tend to write out the parts I know (this character, this scene, this setting), get about to the halfway point of a draft, and then create an outline to start better organizing the stuff I’m coming up with.

See where you feel comfortable, mostly. Play with both ends of the spectrum. Expect your process to evolve if you choose to keep up with this. 

Mindfulness

Honestly, I think this is probably the most important skill for finishing the first draft of your first novel, more than beautiful sentences and perfect plot twists. You need to be able to keep putting words down with the knowledge that you’re not perfectly nailing it today. But nailing it is not today’s goal; that’s your future self’s problem when you revise. Unlike a lot of “good luck, future self” things, this is a healthy one. There is drafting you and revising you. Today, drafting you is in charge. Revising you will use different skills and priorities.

Our goal, with endless thanks to Anne Lamott, is a shitty first draft. I say this joyfully.

Here’s the great news: your first draft doesn’t have to be good, and odds are it won’t be. We’re not setting records for the marathon; we’re crossing the line in our own time, content enough not to have collapsed on the way.

While making yourself believe your story, you also need to make yourself believe that your writing will get better – and the thing is, it really will! You can remind yourself that I promised you this, and I mean it: if you choose to keep going, it just will.

You’ll also start to learn things about your own writing. Everyone – yes, everyone – has patterns in their writing that, left unchecked, would be undesirable in a final draft. Luckily for us, we’re only writing a first draft, and we don’t have to fix these things! You can ignore them entirely, or you can choose to passively witness them with the goal of addressing them on another day. My characters love to gaze at things pensively before finally saying something out loud. They can do this as much as they wish in a first draft.

The same is true of themes, which are the higher-level “abouts” of a story. A story might be about two people who meet at work and end up on a road trip from San Francisco to Montana, but the theme might be discovery of the true self, the compromises we make in love, or the American dream. Themes are a concern of the revising you in the future. You have the option of totally ignoring it in a first draft. If you want, you can watch and get a sense of what some likely themes are… and then put them away until the time is right.

Remind yourself: perfection is not required this day. Only word count and progress.

How to keep going on this long solo journey

Have a place and a time to do this thing. Yes, routine again. But it helps to not have to do the work of where and when anew every time. 

Bribe yourself. Maybe you go on a walk afterward or text a friend. I don’t love using food for stuff like this, because we don’t need to earn food. But put an incentive on the other side, and routine will come together faster.

Practice accountability, even if it’s just posting about what you’re doing on Twitter. More on this in a minute.

Practice being very nice to yourself. When I write a note about a to-do, I often write “please.” Writing can be punishing. Be soft where you can. 

Celebrate victories. Victories can take lots of forms. Embrace most of them, even and especially the small ones. 

How to know you’re doing actual work

The simplest way to do this is to track your word or page count. This can be very official (I have a spreadsheet) or just by taking a look at things (shortcut in Google Docs: cmd-shift-c). However, this can also be emotionally unhelpful for some of us, so don’t feel like this is how you need to do this. If you suspect this kind of rigorous measurement will psych you out, skip it!

Instead, consider tracking your butt-in-seat-hands-on-keyboard time. And this doesn’t have to mean writing down your minutes. It can just mean that you always sit down to write for a half hour after work. You may write, you may not, but you created the conditions where writing can take place. That can be victory too.

Related: consider finding a routine. Feel happy when you stick to it. I like London Writers’ Hour, Australia edition, which is roughly at lunch in my time zone. On those days, I get to know exactly when I’ll write with no thinking or life rearranging. Awesome.

Set goals for yourself if you want, but keep them gentle at first. Ambition can come later. People don’t start out by climbing a mountain; first, they usually climb an indoor wall of low difficulty, and then they work their way up. Writing’s the same.

Accountability

If you want to do this, figure out if you do best with accountability to yourself or someone else.

I do a mix of bullet journaling and spreadsheets to keep on top of my life and progress on most things in it. It means a lot to me to check certain boxes every day, and I really dislike missing one. So internal accountability works well for me.

But you may do better with external accountability. In that case, you can try posting on social media when you’ve done your work for the day, texting a friend, or reaching out to writing community. Speaking of…

Community and why it’s useful

Writing’s a lonely journey. It’s nice to have parts of it be less so.

I’ve found community largely through Slacks: Clarion West’s during their summer writing event, dedicated channels in various tech Slacks I’m in. But I’ve also done things on Twitter, and there are subreddits that are super active: r/writing, r/KeepWriting, r/writers, and others dedicated to specific genres are great places to start. When in-person events are going on, local readings, writing classes, and conferences are really useful for this. And writing classes still exist these days – there are tons being held over Zoom, so you can commune remotely and still meet people. Classes held across multiple days are great for this.

Beyond that, keep going, and you’ll get to a point where you need beta readers, which are the first people who read something you’re working on once it gets to a place where it’s ready for someone else. (For a lot of people this is the second, third, or fourth draft.) It’s a great idea to make friends with people whose work is in a similar place to yours; you can learn together, and quite possibly (if you have ambitions for your work) help each other succeed.

And then: just one damned thing after another

It’s useful to have an idea of where to start and where you’re going (or I feel reassured by that, anyway). But you don’t have to. Write the thing you’re excited by first. Then figure out the next thing you’re excited by and keep going. This gives you momentum, but it’s also good data. If you’re excited to write it, probably someone will be excited to read it. Conversely, if you find yourself putting off writing something over and over again or it feels like a drag, that may well shine through to the reader too. Write the boring stuff too, but note where the electricity is.

Troubleshooting

All writers are different, but many of us experience the same problems.

“I keep rereading what I’ve written and trying to optimize it.”

Write yourself a note about what you want to accomplish the next time you sit down. Then, if you must, only reread the last couple of paragraphs before you start typing. Minimize the reorientation you have to do, and you’ll be less likely to get lost in the work.

“I hate what I’ve written lately.”

Totally ok. We all do sometimes. The more you do this, the more you’ll cringe when rereading old stuff – you’re growing, so this is actually a great sign. It’s great to have more than one thing to write going at any given time, so your ENTIRE WRITING SUCCESS doesn’t depend on solving the one and only problem before you right now. Alternatively: walk away for a while. It’s totally normal and good to put things down and not pick them back up for a bit. A lot of times, answers creep in at the quiet moments. My brain likes to give me surprise answers when I take walks and when I’m falling asleep.

“I don’t know what happens at the end.”

Very normal. Meander with your characters more. Dive into the middle. Explore your world, figure out colors and textures and smells. The end, like all the rest, is a work in progress. Things can change wildly between drafts, and that’s totally valid. I also like writing monologues from characters if I feel like I don’t understand their motivations or goals. Usually they’ll reveal something to me if I just give them space to talk. 

“I don’t like this novel business after all.”

Congratulations, my friend. You get to choose another hobby if this really just feels bad for you. And cheers to you for trying something new!

Takeaways

Thanks for staying with me through two very different subjects! Here’s what I hope you carry away with you.

The command line can do so much powerful, cool stuff, and you can do awesome things with the information it can give you.

If you want to write a first draft novel, you absolutely can.

As with so many things: butt in seat, hands on keyboard. You can do it.

Thank you <3

Diana Initiative 2021: The System Call Is Coming from Inside the House

Like ghosts, security vulnerabilities are the result of us going about our lives, doing the best we can, and then experiencing things going awry just because that’s usually what happens. We plan systems, we build software, we try to create the best teams we can, and yet there will always be echoes in the world from our suboptimal choices. They can come from lots of places: one team member’s work not being double checked when a large decision is at stake, a committee losing the thread, or – worst of all – absolutely nothing out of the ordinary. Alas, it’s only true: security issues are a natural side effect of creating and using technology.

In this post (and the talk it accompanies; video to come when it’s up), we’re going to talk about the energy signatures, cryptids, orbs, poltergeists, strange sounds, code smells, and other things that go bump in the night. That’s right: we’re talking about security horror stories.

Before we get started, here’s a little disclaimer. Everything I’m about to talk about is something I’ve encountered, but I’ve put a light veil of fiction on everything. It’s just professionalism and good manners, a dash of fear of NDAs, and a little superstition too. The universe likes to punish arrogance, so I’m not pretending I’m immune to any of this. No one is – it’s why appsec exists!

Now: let’s get to the haunts.

Automatic Updates

I’m starting with one that’s probably fresh in everyone’s minds, and I’m mentioning it first because it’s such a dramatic betrayal. You think you’re doing the right thing. You’re doing the thing your security team TOLD you to do by keeping your software up to date. And then suddenly, you hear a familiar name in the news, and you have an incident on your hands. 

a white man in a blue shirt who has been stabbed by his own wooden stake. A dialogue bubble reads, "Mr. Pointy, no!"

This one is like getting stabbed with your own stake when you’re fending off a vampire; I thought we had a good thing going!

A drawing of a newspaper that says "OH NO" and then "YIKES YIKES YIKES"

Ideally, you’ll learn about it via responsible disclosure from the company. Realistically, it might be CNN. Such is the way of supply chain attacks.

You invite it in by doing the right thing. Using components, tools, and libraries with known vulnerabilities is in the most current version of the OWASP top ten for a reason, so there’s endless incentive to keep your stuff up to date. The problem comes from the tricky balance of updates vs. stability: the old ops vs. security argument.

A couple of jobs ago, we were suddenly having trouble after our hourly server update. Ansible was behaving itself, but a couple of our server types weren’t working right. After an intense afternoon, some of my coworkers narrowed it down to a dependency that had been updated after a long time with no new releases. It wasn’t even what you’d call a load-bearing library, but it was important enough that there’d been a breaking change. The problem was solved by pinning the requirement to an earlier version and reverting the update.
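If you haven’t had the pleasure, a pin looks something like this in Python land (the package name and version are made up for illustration):

# requirements.txt
misbehaving-lib==2.3.1  # last known-good release; revisit in six months

That == does the heavy lifting: without it, the next update happily grabs whatever’s newest, breaking change and all.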

My own sad little contribution toward this not being the state of things forever was to make a ticket in our deep, frozen, iceboxy backlog saying to revisit it in six months. I was an SRE then, but I was leaning toward security, and I was a little perturbed by the resolution – though I couldn’t have suggested a better solution for keeping business going that day.

(It did get fixed later, by the way. My coworkers reached out to tell me, which was very kind of them.)

This has become one of those stories that stays with me when I review code or a web UI and find that an old version of server software or a dependency has a known issue. Even if it doesn’t have a documented vulnerability, not throwing the parking brake on until it’s updated to the most recent major version feels like doing badly by our future selves, even if it’s what makes sense that day. 

The best way to fix this is to pay attention to changes, check hashes on new packages, and generally pay close attention to what’s flowing into your environment. It isn’t easy, but it’s what we’ve got.

Chrome extensions and other free software

The horrors that lurk in free software are akin to the risks of bringing home a Ouija board you found on the street.

Behold: my favorite piece of art I made for this talk/post.

a Ouija board sits on top of a pile of trash bags in a brown puddle

You can identify it by listening for you or someone around you saying, “Look at this cool free thing! Thanks, Captain Howdy, I feel so much more efficient now!” Question everything that’s free, especially if it executes on your own computer. That’s how we invite it in: sometimes it’s really hard to resist something useful and free, even if you know better.

A particular problem with Chrome extensions is that you can’t publish them without automatic updates enabled, so someone has a direct line to change code in your browser, so long as you keep it installed.

Captain Howdy Efficiency Extension with picture of Regan from The Exorcist

Last fall, The Great Suspender, a popular extension for suspending tabs and reducing Chrome’s memory usage, was taken over by malicious maintainers. As a result, people who had, once upon a time, done their due diligence were still sometimes unpleasantly surprised.

That’s the tough thing about evaluating some of these: you have to consider both the current risks (permissions used, things lurking in code) and the possible future risks. What happens if this program installed on 500 computers with access to your company’s shared drive goes rogue? It makes it difficult to be something other than that stereotype of security, the endless purveyors of NO. But a small risk across a big enough attack surface ends up being a much larger risk.

In short: it’s the Facebook principle, where if you get a service for free, you might consider the possibility that you’re the product. Security isn’t necessarily handled as carefully as it should be for the product rather than paying customers. Pick your conveniences carefully and prune them now and then. (Or get fancy, if you control it within your company, and consider an allowlist model rather than a blocklist. Make conveniences prove themselves before you bring them in the house.)

unsafe-eval and unsafe-inline in CSP

watercolor of a brown book that says "Super Safe Spells, gonna be fine!"

Our next monster: an overly permissive content security policy. I’d compare it to a particular episode of Buffy the Vampire Slayer or any movie that uses that trope of people reading an old book out loud without understanding what they’re saying.

Fortunately, a content security policy is a lot easier to read than inspecting every old leather book someone might drag into your house.

watercolor of a brown book that says "script-src 'unsafe-eval'" and "Good CSP Ideas"

We invite it in because sometimes, the risky choice just seems easier. You just need to run a little code from a CDN that you need to be able to update easily. You know.

For fending it off, I would gently urge you not to put anything in your CSP that literally contains the word “unsafe.” I honestly understand that there are workarounds that can make sense in the moment, when you’re dealing with a tricky problem, when you just need a little flexibility to make the thing work.

In that case, I urge you to follow my quarantine problem-solving process, for moments where you need to figure something out or finish a task, but the brain won’t cooperate.

  1. Have you eaten recently? (If not, fix that.)
  2. Does this just feel impossible right now? Set a timer, work on it for a bit, then stop.
  3. Can you not think around this problem usefully? Write out your questions.

I would suggest starting with “why do I think unsafe-eval is the best option right now, and what might I google to persuade myself otherwise?”

Sometimes you want to keep things flexible: “I’ll just allow scripts from this wide-open CDN URL wildcard pattern.” But what can this enable? What if the CDN gets compromised? Use partners you trust, sure, but it’s also a good idea to have a failsafe in place, rather than having something in place that allows any old code from a certain subdomain.
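To make the contrast concrete, here’s a sketch (the domains are placeholders). Asking for trouble:

Content-Security-Policy: script-src 'self' 'unsafe-inline' 'unsafe-eval' https://*.cdn.example.com

Friendlier:

Content-Security-Policy: script-src 'self' https://cdn.example.com; object-src 'none'

And if you genuinely need inline scripts, nonces and hashes exist for exactly that; they take more setup than 'unsafe-inline', which is rather the point.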

Look, it’s hard to make things work. You have to be a glutton for punishment to do this work, even if you love it. (And I usually do.) But you can’t just say yes to everything because of future-proofing or whatever feels like a good reason today, influenced by your last month or so of work pains. You can’t do it. I’m telling you, as an internet professional, not to do it. Because your work might go through my team, and I will say NOT TODAY, INTERNET SATAN, and then you’re back at square one anyway.

Stacktraces that tell on you

“The demon is a liar. He will lie to confuse us; but he will also mix lies with the truth to attack us.”

William Peter Blatty, eminent appsec engineer and author of The Exorcist

Logs and stacktraces contain the truth. Right? They’re there to collect context and provide you information to make things better. Logs: they’re great! Except…

Exception in thread "oh_no" java.lang.RuntimeException: AHHHHHH!!!
    at com.theentirecompany.module.MyProject.superProprietaryMethod(MyActualLivelihood.java:50)
    at com.theentirecompany.module.MyProject.catOutOfBagMethod(MyActualLivelihood.java:34)
    at com.theentirecompany.module.MyProject.underNDAMethod(MyActualLivelihood.java:27)
    at com.theentirecompany.module.MyProject.sensitiveSecretMethod(MyActualLivelihood.java:11)
    at com.theentirecompany.module.MyProject.oh_no(MyActualLivelihood.java:6)

…except for when they get out of their cage. Or when they contain information that can be used to hurt you or your users.

We invite it in by adding values to debug statements that don’t need to be there. Or maybe by writing endpoints so that errors might spill big old stacktraces that tell on you. Maybe you space out and leave debugging mode on somewhere once you’ve deployed. Or you just haven’t read up on OWASP’s cheat sheet on security misconfiguration.

How to fend it off: conjure a good culturally shared sense of safe log construction and remember what shouldn’t be in logs or errors:

  • Secrets
  • PII
  • PHI
  • Anything that could hurt your users
  • Anything that could get you sued

Make a commit hook that checks for debug statements of any kind.
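Here’s a minimal sketch of such a hook – the patterns are examples, so tune them to whatever debug output actually looks like in your codebase:

#!/bin/sh
# .git/hooks/pre-commit – refuse commits whose staged additions look like debug output
if git diff --cached | grep -E '^\+.*(console\.log|printStackTrace|DEBUG ?= ?[Tt]rue)'; then
  echo "That looks like debug output. Remove it (or use --no-verify if you must)." >&2
  exit 1
fi

Drop it into .git/hooks/pre-commit, make it executable, and adjust from there.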

Secrets in code

“To speak the name is to control the thing.”

Ursula K. Le Guin

The monster I’d compare this to is when the fae (or a wizard, depending on your taste in fantasy) has your true name and can control you.

API_KEY = "86fbd8bf-deadbeef-ae69-01b26ddb4b22"

How to identify it: read your code. Is there something in there you wouldn’t want any internet random to see? Yank it! Use less-sensitive identifiers like internal account numbers if you need to, just not the thing that lets someone pretend to be you or one of your users if they have it.

You know you’re summoning this one if you find yourself saying, “Oh, it’s a private repo, it’ll never matter.” Or maybe, “It’s a key that’s not a big deal, it’s fine if it’s out there.”

We fend this one off with safe secret storage and not depending on GitHub as a critical layer of security.

We all want to do it. Sometimes, it’s way easier than environment variables or figuring out a legit secret storage system. Who wants to spend time caring for and feeding a Vault cluster? Come on, it’s just on our servers, it’s not a big deal. It’s only deployed within our private network.

It is a big deal. It is my job to tell people not to do this. Today I will tell you for free: don’t do this!
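What to do instead looks roughly like this – every name below is made up, and the exact incantation depends on your secret store:

# instead of API_KEY = "..." living in the source:
export API_KEY="$(vault kv get -field=value secret/myapp/api-key)"
./start-my-app  # the app reads API_KEY from its environment

The point is that the value lives in the store, gets injected at runtime, and never touches the repo.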

I’ve argued with people about this one sometimes. It surprises me when it happens, but the thing is that people really just want to get their job done, which can mean the temptation of doing things in the way that initially seems simpler. As a security professional, it becomes your job to help them understand that saving time now can create so many time-consuming problems later.

It’s great to have a commit check that looks for secrets, but even better is never ever doing that. Best of all is both!

A single layer of input validation

Our next monster: a single layer of input validation for your web UI. I quote Zombieland: always double-tap.

many snaky hydra heads baring pointy teeth

Our comparison for this one is anything tenacious. Let’s say zombies, vampires, hydras. Anything that travels in a pack. And we identify it by siccing Burp Suite on it. Maybe the form doesn’t let you put a li’l <script> tag in, but the HTTP request might be only too happy to oblige. We invite it in by getting a little complacent.
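For instance, your form might refuse that script tag, but nothing stops a request like this from skipping the form entirely (the URL and field are made up):

curl -X POST https://app.example.com/api/comments \
  -H 'Content-Type: application/json' \
  -d '{"comment": "<script>alert(1)</script>"}'

If the server assumes the front end already cleaned everything up, that payload lands as-is.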

The best way to fend it off is to remember that regular users aren’t your real threat (and shoddy validation mostly just irritates people whose names don’t meet its definition of “normal,” which can do some awful racist things). There’s a reason injection, especially SQL injection, is a perennial member of the OWASP top ten.

Say it with me: client-side and server-side validation and sanitization! And then add appropriate encoding too, depending on what you’re doing with this input.

Most people do only interact with your server via your front end, and bless them. But the world is filled with jerks like me, both professional jerks and people who are jerks for fun, and we will bypass your front end to send requests directly to your server. Never take only one measure when you can take two.

Your big-mouthed server

curl -I big-mouthed-server.com

How to identify this one? curl -I thyself

We invite this one in by not changing defaults, which aren’t always helpful. This also relates to security misconfiguration from the OWASP top ten. The early internet sometimes defaulted too much toward trust (read The Cuckoo’s Egg by Cliff Stoll for a great demonstration of the practices and perils of this mindset), and we can still see this in defaults for some software configs.

Protect yourself from this one by changing your server defaults so your server isn’t saying anything you wouldn’t want to broadcast. The primary concern here is about telling someone you’re using a vulnerable version of your server software, so keep your stuff updated and muffle your server.
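
If you live in Express land, for instance, one small muffle looks like this (a sketch of one layer, not the whole job; your web server’s own Server header needs taming too):

const express = require("express");
const app = express();
app.disable("x-powered-by"); // stop announcing the framework in every response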

Let’s go X-Files about it: trust no one, including your computers.

ESPECIALLY your computers.

Don’t trust computers! They just puke your details out everywhere if someone sends them a special combination of a few characters.

In conclusion, sometimes I wish I had been born in the neolithic.

Laissez-faire auth (or yolosec, if you prefer)

Our next monster: authentication that accepts any old key and authorization that assumes that, if you’re logged in, you’re good to go. Its horror movie counterpart is that member of your zombie-hunting group that cares more about being a chill dude than being alive.

You can identify it by doing a little token swapping. Can you complete operations for one user using another’s token? The monster is in your house.
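
That test can be as simple as this sketch (URL and token invented): replay a request against user A’s resource while carrying user B’s token.

fetch("https://api.example.com/v1/documents/123", {
  method: "DELETE",
  headers: { Authorization: "Bearer TOKEN_FOR_USER_B" }, // user B's token, user A's document
}).then((res) => {
  console.log(res.status); // 403 means someone checked; 2xx means the monster is in your house
});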

Here we are again at the OWASP top ten: broken authentication is a common and very serious issue. The problem compounds because we need to do better than “Are you allowed in?” We also need to ask, “Are you allowed to do THIS operation?”

We invite this one in by assuming no one will check to see if this is an issue or by assuming the only people who interact with your software have a user mindset. Or, sometimes worst of all, just not giving your devs enough time and resources to do this right.

We can fend it off by using a trusted authentication solution (OAuth is popular for a reason) and ensuring there are permissions checks, especially on state-changing operations – ones that go beyond “if you’re not an admin, you don’t have certain links in your view of things.”

I’ve seen a fair amount of “any token will do” stuff: tokens that don’t invalidate on logout, tokens that work if you fudge a few characters, things like that. It’s like giving a red paper square as a movie ticket: anyone can come in anytime. Our systems and users need us to do better.
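
What does better look like? Roughly this (a hedged sketch with a hypothetical requireAuth middleware and data layer): authentication establishes who’s asking, and authorization then asks whether they get to do this particular thing.

app.delete("/documents/:id", requireAuth, async (req, res) => {
  const doc = await db.getDocument(req.params.id); // hypothetical data layer
  if (!doc) return res.status(404).end();
  // We know who req.user is; now ask whether THIS user may do THIS operation
  // to THIS resource.
  if (doc.ownerId !== req.user.id) return res.status(403).end();
  await db.deleteDocument(doc.id);
  res.status(204).end();
});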

The elder gods of technology

Ah, yes, the technology that will never leave us. Think ancient WordPress, old versions of Windows running on an untold number of servers, a certain outdated version of Gunicorn, and other software with published CVEs.

“That is not dead which can eternal lie.”

Which monster would I compare this to? Well, I’m not going to say his name, am I?

You know you’re at risk of summoning it when you hear someone say “legacy software,” though newer projects are NOT immune to this. Saying “we’re really focusing our budget on new features right now” is another great way to find… you know… on your doorstep.

We can fend it off by allocating budget to review and update dependencies and to fix problems when they’re found. And those problems should be sought out on a regular schedule, not just stumbled over.

No tool, framework, or language is bad. They all have potential issues that need to be considered when you use them. For instance, there are still valid reasons to use XML, and PHP is a legitimate language that just needs to be lovingly tended.

Yep, we’re back at the risk of using components with known vulnerabilities from the OWASP top ten. No tool, framework, or language is bad, but some versions have known problems, and some tools have common issues if you don’t work around them. It’s not on them; it’s on you and how you use and tend your tools.

The real incantation to keep this one away is understanding why we keep these things around. No engineering team keeps their resident technical debt nightmare because they like it. They do it because it’s worked so far, and rewriting things is expensive and finicky, particularly if outside entities depend on your software or services.

I’ve never been on an engineering team that didn’t have lots of other things to do rather than address the thing that mostly hasn’t given them problems… even if none of the engineers without vivid memories of the 90s understand it very well.

Sometimes the risk of breaking changes is scarier than whatever might be waiting down the road, once your old dependencies cause problems… or someone on the outside realizes what your house of cards is built on.

Javascript :D

Speaking of no tool, framework, or language being bad: let’s talk about Javascript. Specifically Javascript used incorrectly.

Our horror comparison? It’s a little 12 Monkeys, a little Cabin Fever. Anything where they realize the contagion is in the water or has gone airborne. “Oh god, it’s everywhere, it’s unavoidable, it’s a… global… pandemic…” Oh no.

The way to summon this one is simple. Say it with me:

internet

Anything… internet. It’ll be there, whether you think you invited it or not.

To fend it off? Just… be very careful, please.

I’m not going to make fun of Javascript. Once I was a child and enjoyed childish things, like making fun of Javascript. Ok, that was like three years ago, but now I am an adult and have quit my childish ways. In fact, I made myself do it somewhere between when I realized that mocking other people’s tech is not actually an interesting contribution to a conversation and when I became an appsec engineer, so roughly between 2016 and the end of 2019.

The internet and thus life and commerce as we know it runs largely on Javascript and its various frameworks. It’s just there! And there have been lots of leaps forward to make it less nightmarish, thanks to the really hard work of a lot of people, but still, things like doctor’s appointments and vaccination slots and other giant matters of safety and wellbeing hang out on top of the peculiarities of a programming language that was created in ten days.

Unfortunately, new dragons will just continue to appear, because that’s the side effect of making new things: new features, new problems. The great thing for appsec that’s an unfortunate thing about humanity is that we make the same mistakes over and over, because if something worked once, we want to think we have it sorted. And unfortunately, the internet and technology insist on continuing to evolve.

How to fix it? Invest in appsec.

Sorry. It’s a great and important goal to develop security-conscious software engineers and to talk security as early as possible in the development process. But there’s a reason writers don’t proofread their work – or shouldn’t. Writing and reviewing the work are two different jobs. It’s why healthy places don’t let people approve their own PRs. If we created it, we can’t see it objectively. And most of us become sharper when honed by other perspectives and ideas.

The most permissive of permissions

And finally, the monster that’s nearest and dearest to my heart: overly permissive roles, most specifically AWS permissions. I’d compare this to sleeping unprotected in the woods when you know very well they’re full of threats.

You can identify it by checking out your various IAM role permissions. And yes, this is totally a broken authentication issue too, a la OWASP.

The best way I know to invite this one in is by starting out as a smaller shop and then moving from a team of, say, three people who maybe even all work in the same room, doing everything, to thirty or fifty or more people, who have greater specialization… yet your permissions never got the memo.

And the best way I know to fend it off is to make more refined roles as early as you can. I know, it’s hard! It isn’t a change that enables a feature or earns more money, so it’s easy to ignore. Worst of all, as you refine the roles, reduce access, and iterate, you’re probably going to piss off a bunch of people as you work to get it right. IAM: the monster that doesn’t need a clever analogy because getting it right sucks so bad on its own sometimes!

It’s also the monster that lurks around basically every corner, because IAM is ubiquitous. So it’s everywhere, and it’s endlessly difficult: awesome! Alas, we have to put the work in, because messing it up is the most efficient way I know to undermine so much other carefully done security work.

AWS permissions that allow access to all resources and actions
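
In policy form, the monster looks something like this sketch (the thing the image above is warning you about):

{
  "Version": "2012-10-17",
  "Statement": [{ "Effect": "Allow", "Action": "*", "Resource": "*" }]
}

Versus a more refined role that names only the actions and resources the job actually requires (bucket name invented; a sketch, not a template):

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:GetObject", "s3:PutObject"],
    "Resource": "arn:aws:s3:::example-app-uploads/*"
  }]
}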

And yet I feel warmly toward it, because IAM was my gateway into security work. It’s the thing I tend to quietly recommend to the security-aspiring, because most people don’t seem to like doing it very much, yet the work always needs to be done. Just showing up and being like, “IAM? I’d love to!” is a highly distinctive professional posture. Get your crucifix ready and have some incantations at hand, and you’ll never run out of things to do. It’s never not useful. Sorry, it’s just like the rest of tech: if you’re willing to do the grunt stuff and get good at it, you’ll probably have work for as long as you want it.

Whew. That’s a lot of things that go bump in the night.

Let’s end with some positive things. I swear there are some. And if a security engineer tells you that there are still some beautiful, sparkly, pure things in the world, that’s probably worth listening to.

Strategies for vulnerability ghost hunting

Don’t be the security jerk

Be easy to work with. This doesn’t mean being in a perpetually good mood – I am NOT, and former coworkers will attest to this if you ask – but it means reliably not killing the messenger.

People outside of your security team – including and especially people who aren’t engineers – are your best resource for knowing what’s actually going on. Here’s the thing: if you’re a security engineer, people don’t act normally around you anymore. You probably don’t get to witness the full spectrum of people’s natural behavior. Unfortunately, it just comes with the territory. And it means we have to rely on people telling us the truth when they see something concerning.

Even people who are cooperative with security will hedge a little sometimes when dealing with us. I know this because I’ve done it. But you’ll reduce this problem if everyone knows that talking to security will be an easy, gracious thing where they’re thanked at the end. Reward people who are willing to bring you problems instead of creating a culture of fear where they cover them up! Make people feel like they’re your ally and a valued resource, because they are!

Be an effective communicator

Being able to communicate in writing is so important in this work. Whether it’s vulnerability reports, responsible disclosure, blog posts warning others about haunted terrain, or corresponding with people affected by security poltergeists, being able to write clearly for a variety of audiences is one of our best tools.

If you think you’re losing your grip on how regular people talk about this stuff, bribe your best blunt, nontechnical friend to listen to you explain things. Then have that blunt friend tell you when what you said didn’t make a goddamn lick of sense… then revise and explain again. Do this until you’re able to explain things in plain language accessible to your users and spur them to action using motivations that make sense to them.

Now let’s leave this haunted house together and greet the coming dawn.

I hope you encounter none of these terrifying creatures and phenomena that I described, that your trails from server to server in our increasingly connected world are paved with good intentions and mortared together with only the best outcomes. But should you wander into dark woods full of glowing red eyes and skittering sounds belonging to creatures just out of sight… I hope you are better equipped to recognize and banish them than you were earlier. Thank you for reading.

Day of Shecurity 2021: Intro to Command Line Awesomeness!

Recently, I hugged a water buffalo calf. It was approximately as exciting as giving this talk is, which is to say: VERY.

Updated: now with video!

First, the important stuff: here’s the repo with a big old markdown file of commands and how to use them. It also includes my talk slides, which duplicate the markdown (just with a prettier theme, in the way of these things).

Second: I went to the Lookout Security Bootcamp in 2017, one of my first forays into security things (after some WISP events in San Francisco and DEF CON in 2016). That’s where I conceived of the idea of this talk. There was a session where we used Trufflehog and other command line tools, and we concluded with a tiny CTF. Two of the three flags involved being able to parse files (with grep, among other things), and I was familiar with that from my ops work, so I won two of the three gift cards up for grabs. I used the money to buy, as I remember, cat litter, the Golang book, and a pair of sparkly Doc Martens I’d seen on a cool woman in the office that day. I still wear the hell out of the boots, including at DEF CON, and I still refer to them as my security boots.

I spent the rest of that session teaching the rad women around me the stuff I knew that let me win those challenges. This had two important effects on me. The first was that I thought, “Wait, it might be that I have something to offer security after all.” The second was that I wanted to do a session someday and teach these exact skills.

I went to Day of Shecurity in 2018 and 2019 too. It’s a fabulous event. At the last one, just a handful of months before we all went and hid in our houses for more than a year, I went to a great session on AWS security (a favorite subject of mine) by Emily Gladstone Cole. And I thought: oh, there it is. I’m ready. I told my friends that day that I wanted to come back to DoS as a presenter. And I pitched the session, it got accepted, and after a fairly dreamless year, one of mine came true.

So if you’re reading this: hello! These events really do change lives. The things you do really can bring you closer to what you want. And, as I like to say in lots of my talks, there is a place for you here if you want to be here. We need you. Keep trying.

I wrote about my own journey into security here. Feel free to ask questions, if you have them! I love talking about this, and I would like to help you get to where you want to go.

Bugcrowd LevelUp 0x07: How to Do Chrome Extension Code Reviews

This is the blog counterpart of my 22 August 2020 talk for Bugcrowd’s event, LevelUp 0x07.

A year ago right now, I was an SRE, and my only thoughts of Chrome extensions were that they were 1. something that existed, and 2. useful but also prone to news-making security problems. This year, though, I moved to an appsec engineer role, and suddenly they’re a pretty big part of my professional life. 

Since I became an engineer, I’ve been fascinated by the kinds of threats that live in places people are less prone to suspect – or even to consider. Once I learned that my current team sometimes reviews extensions, in addition to the third-party vendor testing that’s a more central part of our responsibilities, I realized I’d found another one of those somewhat neglected areas of security review. Challenge: accepted.

So that’s why I like Chrome extension reviews, the process of which I’ll lead you through in just a few paragraphs, I promise. But why should you care about this? You as an appsec enthusiast or engineer, you as a bug bounty hunter, you as someone else with an interest in the strange corners of the internet that vulnerabilities can hide in. 

Ok, but why do these reviews?

Extensions open up a really interesting attack surface. They’re part of the browser, so embedded in the client, which can let them sidestep some of the protections an unaltered browser might offer. Beyond that, people often have alert fatigue and don’t think that much of a warning like “this extension can read and change your data on every website you visit.” It should be alarming, but that warning applies to so many extensions that it’s easy for people to ignore. 

They also update themselves automatically, on a timetable determined by the browser using the update path that has to be part of any extension listed in the Chrome Web Store. So you have third-party code nestled right in where the business happens, AND said third-party code can change with no warning at a cadence that isn’t set by the end user. 

Last year, Google made some moves to bolster user security, making permission scopes around Gmail and Google Drive more granular, while requiring developers using these scopes to use the smallest access to information possible. Useful; not a panacea. Alas, the Chrome API permission scopes are still fairly broad.

Google has done some mass culls of extensions, and the team behind CRXcavator (which we’ll get to shortly) has uncovered a lot of, uh, interesting extension functionality too. 

However, the automatic updates still leave even polished products open to strange second and third acts. Bad actors buy popular extensions, which then give them a direct line to thousands and thousands of browsers, which are used by folks who generally aren’t extremely vigilant about monitoring their installed extensions. By which I mean: just people, in general. Most of us don’t do a monthly extensions check, which is understandable – but it does leave the door open to some interesting things. 

Including, happily, bug bounties. 

How to review Chrome extensions: tl;dr edition

When I review a Chrome extension, I start by building a story about the extension I’ve been assigned. I research the proposed use case, read the extension’s Chrome Web Store page and other online descriptions, and do a cursory review of the code, trying to learn these things:

What does this extension say it wants to do?

What does it actually do? Is that answer different?

What permissions does it need to have to accomplish its stated mission? 

Do the permissions or code make anything possible outside of the extension’s stated mission?

This is where I figure out where to focus. Most extensions, like most people, are on the up and up. But – and sit down for this one – sometimes developers get sloppy when they’re writing extensions. I wouldn’t even call this the developers’ fault a lot of the time, because Chrome API permissions are broad, and most devs are strapped for time. It’s a recipe that makes for some messy stuff ending up in extensions. 

How to review Chrome extensions: in depth

Once I’ve got a sense of what the extension is supposed to do and what it actually seems to be doing, I work through the code in three steps. 

First, I paste the extension’s Web Store ID (long, all lowercase letters, part of the URL of its Web Store page) into CRXcavator. CRXcavator gives you an aggregated risk score for the extension, based on several metrics. The most relevant to our needs are the content security policy and Chrome API permissions. (Others include externally hosted Javascript libraries and its Chrome Web Store Score.) Here’s CRXcavator’s score for one of its associated extensions.

We’re going to focus more on other parts of the manifest for the purposes of this post, but content security policy can also make a lot of good, weird stuff possible that perhaps shouldn’t be. This setting can literally override the settings of individual websites, so it’s worth spending some time looking at the CSP of an extension as part of your broader exploration.

CRXcavator gives me an early indicator of where risk might live. You can also read through the code on their site, or you can do what I do and use an extension like CRX Extractor/Downloader to get it locally and explore in your text editor. 

Once I have the code in front of me, I read the manifest.json file.

A text editor showing a sample manifest.json file

This is where I spend a lot of my time, because it tells me a lot about what’s possible. These files can get long and intricate: there are 56 different fields that can be included. However, most of the extensions I see don’t use a ton of those. Only three are required, and only four are marked in the docs as recommended. That excludes content_scripts, though, so pretty much all extensions have to go a bit beyond the minimum in order to be able to actually do much. 

In manifest.json, I look most closely at the permissions listed, both the Chrome API permissions included and the URLs cited. From there, I read through the different Javascript files being used and what they’re going to be allowed to do. 
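
For illustration, here’s a minimal sketch of the kind of manifest I mean (the field names are real manifest keys; the extension name, files, and URLs are invented):

{
  "manifest_version": 2,
  "name": "Example Helper",
  "version": "1.0.0",
  "permissions": ["cookies", "storage", "*://*.example.com/*"],
  "content_scripts": [{
    "matches": ["*://*/*"],
    "js": ["content.js"]
  }],
  "background": { "scripts": ["background.js"], "persistent": false }
}

That matches pattern under content_scripts is doing a lot of work here, as we’ll get to shortly.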

Finally, I read the code. I’m looking for weird uses of permissions (chrome.identity and chrome.cookies are fun, but I look up all uses of any permission methods used), any connections to third-party servers, and anything else that seems overly ambitious. 

Assessing the risks

Permissions are the sketch of what can happen; the code shows what will actually happen with them. Because of that, I spend a fair amount of time on those permitted Chrome API operations. The permissions don’t encompass all the possible risk in an extension, but they cover a lot. 

Here’s what I’m looking for across all of the extension code.

Is it sending data? If so, to where? I want to see innocuous and clearly described changes to the appearance and contents of the DOM. I do not want to see data being collected or sent. If I do see data being collected or sent, I have found the new primary focus of my time for the duration of my testing. There are legitimate reasons to do this, of course, but they need to be safely done and well justified. 

Is it storing anything unsafely in your browser? I don’t want to see plaintext secrets stored in the browser, not in session storage and especially not in local storage. This is something I touch on early because of my job’s context; it may be less important to you, but it’s still worth looking into.

Do the scopes of the Chrome API permissions let the extension snoop on everything we’re doing? We don’t like that much either. 

Is the extension snooping on everything you’re doing and then sending it somewhere else? Oh dear. 

Again, there’s opt-in functionality that can justify all of this. But it does need to be justified (from my point of view as a reviewer of risk), and then it needs to be done safely.

This second part is where the bug bounty hunter can have a lot of fun. 

Anatomy of an extension

Let’s start with manifest.json, since it’s where I spend most of my time and it most concisely shows a lot of the weird possibilities in a given extension. The key parts I look at are the general permissions of the extension and then content_scripts and their associated permissions.

Seeking bounties? manifest.json is more likely to be a hint at what will lie in the rest of the Javascript, which is where you’ll find what you’re actually looking for. The really interesting stuff will be in the code, of course. Some extensions have a single fairly simple JS file; others can have files upon files. The official size limit for a CRX file, which is the zip-formatted file format for a Chrome extension, is 2 GB. That’s a lot of room for function and also just utter mayhem. 

Most extensions also have an HTML component, often for creating a customized menu for the extension. If you see popup.html, it’s likely this. These are subject to the same vulnerabilities as any webpage, so it’s worth testing for XSS and anything else you’d try to find in a front-end. 

Permissions and their risks

Permissions for an extension come in two forms: URLs (specific ones or patterns) and Chrome API permissions. These cover all kinds of possible interactions: alerts, accessing and altering browser storage, accessing bookmarks and tabs, among many others. 

There’s a safe use for every permission, but there are also permissions that, if included, are a great place to start when reviewing an extension to make sure it’s safe for the user. Problems arise when the dev needs to invoke a permission or a manifest.json field, like web_accessible_resources, that is necessary for some perfectly ordinary extension function, like accessing packaged resources to use in the context of a web page. Unfortunately, that same permission can be used to execute remote scripts. Like most anything involving Javascript, there are risks that come with using the tools at hand. 

Google has a great guide where they list the permissions they think are most dangerous, with surprisingly high specificity. Removing the escape key’s ability to exit full-screen? Spying on the user with their own camera or microphone? Nightmares! But all are possible with the correct, terrible combination of host and API permissions.

Let’s look at a few of these permissions, now that I’ve spent some time broadly demonizing them.

URLs and pattern matching

For an extension to access a site, its URL must either be specified or match a pattern. Conveniently for devs, there are patterns that make matching easy. Conveniently for bug bounty hunters, this means that lots of devs leave the doors more open than is ideal because they’re trying to futureproof their extension (or, less commonly, because the extension genuinely has business doing something on every site you visit). manifest.json can include and exclude URLs based on these patterns, but including is much more common. 

The patterns and matches I’m most wary of are *://*/* and <all_urls>, with a close second anything that matches a common host (google.com, for instance) with asterisks before or after. *://*/* and <all_urls>, however, match the entire internet, so long as the address being visited is a permitted protocol (http, https, ftp, or file). These permissions give an extension carte blanche to work on any website you visit. Your email, your bank, your employer’s web portal when everyone’s working at home… I find this permission hard to justify. It isn’t that no extensions should use it, but it should be relatively few. Google themselves declare wide-open wildcard patterns to be the highest-risk type of permission, with <all_urls> the next most dangerous. 

URL permissions can live in the top-level permissions of an extension’s manifest.json or in a matches pattern connected to a particular content_script. 

On top of allowing an extension free access to all of the user’s browsing, those wide-open permissions also allow cross-origin requests with none of the usual barriers your browser would present, as well as Javascript injection on any site the user visits. Fun!

That said, one reason people do this (for non-nefarious reasons) is precisely to avoid CORS errors. However, I promise this can be done effectively and more securely – it just requires a little more work to narrow down exactly what URL or pattern needs to be put into place. Beware asterisks; especially beware asterisks with only a host or no host at all. On the plus side, it opens up a lot of interesting territory for the bug bounty hunter. Silver lining, I guess? 
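
The narrowing itself is usually a one-line change, something like this (host invented): name the origin the extension actually talks to instead of the entire internet.

"permissions": ["https://api.example.com/*"]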

webRequest

webRequest lets the extension change and add HTTP headers. It also lets it add listeners and change behavior upon receiving the first byte of a response, at the initiation of a redirect, when a request is completed, and anywhere else in the lifecycle of a web request. This can, of course, be used totally legitimately. However, it’s also been used to intercept and forward user traffic, so if it shows up in the manifest, it’s worth looking at every one of its methods that’s used in the code. Another check on this behavior is that this permission also has to be paired with an appropriate host permission to be able to do anything. 
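
Here’s a sketch of the shape I look for in the code (the URL pattern is deliberately wide to make the point): a blocking listener that sees, and may rewrite, the headers of every matched request.

chrome.webRequest.onBeforeSendHeaders.addListener(
  (details) => {
    // A hostile extension could copy details.requestHeaders here and forward
    // them to a third-party server; a legitimate one might just adjust a header.
    return { requestHeaders: details.requestHeaders };
  },
  { urls: ["*://*/*"] },
  ["blocking", "requestHeaders"]
);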

This permission is likely to be in flux in the future, but for now it’s still here and still weird enough that even the EFF has weighed in on its present and future.

cookies

The cookies permission offers methods around getting, setting, and removing cookies. This can result in a lot of weird possibilities, particularly when paired with gleaning data and sending it elsewhere. Fortunately, the solution for this is the same as for XSS: set that HttpOnly flag on your cookie so it isn’t accessible via the Javascript of a content_script. However, if people faithfully set that flag, my job might not need to exist, so it’s worth mentioning. This also requires appropriate host permissions to be able to do anything. 
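
The reading half looks like this sketch (domain invented); it’s the quiet sending-it-elsewhere half that turns a feature into a monster.

chrome.cookies.getAll({ domain: "example.com" }, (cookies) => {
  // Enumerating cookies is legitimate for plenty of extensions; serializing
  // them and shipping them off somewhere else is the pattern I read for.
  console.log(cookies.map((c) => c.name));
});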

The code, more generally

The manifest.json file can point to a few different uses for Javascript files. The background section of the file, which is optional, includes code that is loaded when needed, including when a content script sends a message, when certain events happen, or when the extension is first installed or updated. In contrast, content_scripts (which, as I mentioned, have their own permissions) run in the context of webpages. 

When I read these Javascript files, the things I watch for are shaped by the particular concerns of my team at my very large company. My primary concerns, however, are pretty universal: how the extension might be accessing, storing, and sending information. I watch for any input text being saved or otherwise manipulated. I find out what is being collected, if the collection and handling of that information supports the story I’ve put together about the extension through my research, and, if not, where the two diverge. After that, I check for HTTP requests. If I find them, I look at what they’re doing. What’s being stored and sent? How is it being sent, and where is it going? Is it something the user of the extension will have opted into via an account, or is this being done more quietly? 

I also watch for tells like the use of innerHTML – basically, if it’s something you’d watch for when reviewing webapp code, it’s worth trying on the menus and other popups that are part of an extension. 
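
The classic tell, sketched with invented names:

document.querySelector("#results").innerHTML = userSuppliedText; // XSS risk: parsed as HTML
document.querySelector("#results").textContent = userSuppliedText; // safer: treated as plain text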

Good news for bug bounty hunters

Companies’ official Chrome extensions can create weird and surprising vulnerabilities for their customers. Think about the permissions involved in an extension that alters text typed into a field. It can read what you type, store it, and send it to be judged by an algorithm across the internet, with the goal of furnishing, say, spelling corrections in a way that reads as nearly instant. We type a lot of sensitive things into text fields – passwords, credit card numbers, questions to our insurance companies – and I personally don’t trust an extension to have fine enough control to not at least process the text written in fields other than the ones it’s supposed to be targeting. 

Let me tell you: even well-funded companies have too-loose permissions and other weird stuff happening in their extensions. It’s the nature of this very particular medium. Let me also tell you: there are bounties for reporting extension issues, both from Google and sometimes from the companies that make them.  

Because extensions are both everywhere and, in my experience, undervalued as a bug bounty target, I think the world of extensions in particular would benefit from your attention. Beyond what the permissions and code can yield the first time you look at an extension, the particular opportunity of how extensions are updated offers a lot of possibilities for repeated investigation. The releases for extensions, unlike those for websites, are numbered. You know if something changed, enough that Chrome has a page for it. Chrome also has its own program, in addition to extensions being in scope for many companies with existing bug bounty programs. People get paid for extension findings.

Extensions have depths that can contain a lot of, let us say, interesting things. I hope they’re inviting enough for you to dive in and start finding vulnerabilities that make this a safer ecosystem. 

Want to learn more?

If the dazzling galaxy of links throughout this post wasn’t enough, I’d suggest two more. This tutorial is how I created my first extension to install locally to better understand the process. This 2016 DEF CON talk isn’t completely applicable to the capabilities of extensions today, but it’s an excellent guide to how good and weird it has been possible to get with extensions.