Erica Anderson
COO and Co-Founder @ SafeStack
Erica has played many roles in security: consultant, engineer, tester, analyst, incident responder, and even instructor. She has worked with many people and orgs, and knows what it feels like to try and move fast while staying secure - she even co-authored a book on the topic!
Erica is the Co-Founder, COO, and generally huge security nerd at SafeStack, where the goal is to help development teams (of all shapes and sizes) build secure things. She does heaps of work in the community too: she is a lead organiser for the New Zealand infosec conference Kawaiicon (Kiwicon), and a Trustee for the charity Digital Future Aotearoa. She also co-authored a book on the topic, 'Security For Everyone', published by Holloway.
Including security in your engineering culture
Transcript
Prae:
Next up, we have someone who is a self-proclaimed security nerd.
She has written books.
She is also a lead organiser of Kawaiicon.
She has done heaps of community stuff and she is also a co-founder of a security education software company called SafeStack.
Please welcome the co-founder and COO of SafeStack, Erica Anderson!
Erica:
Hello, good afternoon, everyone.
We've had some really brilliant talks, and I feel like everything's been set up perfectly for me to come nerd out about security with you all.
Just out of curiosity, for those of you out in the audience: who here feels that cyber security is part of what your team does?
Raise your hands.
Oh there's so many more hands than last time I asked this question.
This is really exciting.
Who out there is interested in security and would love to find ways to build things that are a bit more secure? Who cares about security?
Okay, so most of the hands which is really good.
So, it's really nice to be here.
As Prae mentioned, I'm a huge security nerd, and I've probably seen you all at various points around the software and security communities.
My name is Erica.
I'm the co-founder and COO of SafeStack and that's what we do: we talk about security and we try and teach software development teams - everyone on the development team, not just the engineers and developers - what they can do about security.
And really when you think about security, the very first thing you think about is probably this perception in your head of what someone in security looks like.
And it's got a pretty rough reputation.
I mean as someone in the security community, I definitely know that some of my peers have their challenges, I guess.
And when you think about it, the very first thing a lot of us - without any work experience with security - think of is some type of sci-fi movie where there's an underground community of nerds and weirdos out to stop some lethal malware from taking over the world, silently, without anyone knowing.
And as a con organiser myself, and as a member of the security community, I am a self-proclaimed nerd and weirdo, so I accept this perception that comes with it.
But that's not exactly what security is all about.
But to be honest, who didn't grow up watching The Matrix and idolising Trinity, with her cyber rain and all the trench coats and whatnot? That's the first thing you think about when you think about security.
For those of us who have work experience - and I know this from some of the questions asked earlier - security can feel a little bit like a gatekeeper, right?
There's a really good question earlier about, "How can our teams get involved in stuff like this?"
"How can our teams maybe interact better with security?"
And my ears pricked up and I was intently listening, because a lot of us have had the experience of security being that gate you must pass.
And a lot of the time - I mean, I've worked in many places across New Zealand where the security team was seen as the "Department of No."
And there's a long queue of people coming to ask for permission from security only to get a really difficult response.
Or security interacting with teams in a way that made them feel like they didn't have the choice, or the freedom, to do the good security thing on their own without asking for permission.
So, putting aside the whole sci-fi cyberpunk aesthetic and vibe that you get from security, overall it's a bit rough.
But it's been getting better.
I do genuinely believe that, and I try to encourage everyone to feel that security can be part of building awesome, amazing things.
For those of you who are in organisations of various shapes and sizes, one of the common perceptions that I have come across is that no matter which end of the spectrum you're on, you feel that maybe now is not the time to think about security.
When you're on a small team, at a small startup, you probably feel like,
"Security's too sophisticated"
"We're too small"
"It's too expensive for us to get started"
And as time goes on and that business grows, on the other end of the spectrum, when you're working for larger, perhaps global organisations, "too expensive" probably gets even more expensive, and it starts to feel:
"We can't do it because it's too restrictive"
"It's going to slow us down"
"We're not going to be able to provide the best features or provide those experiences that we need to provide"
"It's too slow"
"It's too hard"
And no matter where you look on the spectrum of organisation size, there's always going to be a really good excuse for why now might not be a good time for security.
And there's context behind those reasons that's worth keeping in mind.
Security isn't a simple thing that you can purchase off a shelf, no matter what vendors might [inaudible]... it's something that really changes based on the context of the systems you're building and the teams you're working in.
So, when you're part of a small startup, you've got small teams, quick releases, and limited complexity - maybe you've just built your MVP - and you're living in these green fields of opportunity.
And on the other side of the equation, when you're working for large organisations, you have legacy monolithic systems that can be hard to update when a security fix has recently been released.
You've got these big, niche teams that maybe don't communicate as well; releases are slower; complexity is higher.
So, it's definitely different types of security challenges depending on where your organisation is at.
Now, one of the things from my experience of working with organisations - literally from one end of the spectrum, a team of one, all the way through to massive global teams, both as an external consultant and as an internal security engineer - is that when you don't start thinking about security and embedding it in what you do early on, it becomes a bit of a stormy cloud.
When you're quite small and moving quite fast, you're very quickly going to grow, and you're going to have to take on customers who give you their data - their valuable data - to run your product or your system.
But that data is also protected under different types of legislation and regulation.
And very quickly, what happens when you start emerging from a small startup team into something with a pretty sizable book of business is that you're going to have to start telling your customers what you do for security.
Although you're a small, maybe scrappy team with a small product, there's an expectation that their data is going to be respected and kept secure, and that expectation starts to compound as you make your way through.
You might start offering your software in locations that have rules and regulations different from where you currently operate.
You build your software bigger and bigger, and you start to grow what we call the attack surface - the surface that people can poke and prod to try and do something bad - and that gets bigger as your software grows.
And ultimately, at the other end of the spectrum, you're in a really big organisation with a really, really big software footprint and so much software in use - and we all know software needs to be updated. Can you imagine trying to roll out a patching process in a global organisation that has hundreds and hundreds of different types of software?
It'd be massive and really quite challenging.
And what we've spoken about all day today is that all of these disciplines and things we're talking about - whether it's security, culture, or anything else - aren't a matter of just switching them on, right?
You can't just turn on one DevOps and away you go, and the same goes for security.
It's something that we start to practise, which is why I'm so, so excited to be listening to some of these talks, because it's very much the same thing.
Security isn't really something that's reserved for just a certain type of nerd or a certain person who has a skill set to be able to hack something or break into something.
Security is something that all of us can build as part of our practice, and it's something Adam touched upon, and something that came up in the panel: think about your sphere of influence.
There's definitely stuff that everyone here can do, within your own teams, no matter how big your organisation is, to have a positive influence and build a better security culture.
Is it going to change the entire organization?
Probably not.
But it probably means that the stuff that your team ships and the stuff that you push out is going to be more secure, and your customers who use that particular feature or that particular service will be a little bit happier, and that stuff starts to snowball.
That becomes something that other teams can copy.
They can look to you for a bit of influence and inspiration and perhaps copy it for their team.
Now, no doubt a lot of your large organisations will have security teams, and it's always important to bring them along with you. But there's nothing to stop you, and there's a bunch of small things I've seen be really, really successful that I wanted to share with you all - ideas for things you could do to build that security culture in your teams.
Now, none of these are perfect; they're just examples of things I've seen be successful in small pockets and small communities.
So, the very first one is starting with the team that you have.
Some of you might work for small teams who don't have a security person on your team.
Your security person might be Google: you might Google things and check things, or ask people in Discord or Slack channels how they might have done something around security, or how they responded to that report that came into the security@ inbox about a particular bug someone found in your product.
How do I respond to that, right?
So, you don't have to have a security team or a specific person in order to start doing anything about security.
You can definitely start within the team that you have.
I'll let you in on a bit of a secret.
I have been in security teams.
I've been a security engineer and I've been an analyst and I've been a consultant, and people have brought me in to help point out the security risk of their systems.
There is no magic to that.
Literally, most of those conversations start with the person who actually built the system talking me through how it's built, with all of that context and information: What data is collected?
Why do we collect it?
Where does it go?
How do we make those decisions?
All of that context is so, so, so fundamental, and actually that's the secret to identifying the security stuff.
It's not having me, someone with a security background - it's just a matter of talking through how those things were built, understanding how we made those decisions, and what security decisions we inherently made along the way.
So really, you all as the builders and the development teams, you all have the superpower to answer most of those questions.
So definitely start thinking about it, and start with the team that you have.
And as part of starting with that, something you can do is just understand your customers and your industry.
A lot of you obviously wouldn't have created the organisations that you work for, but there's a lot of decisions around: Why is it that we provide this product?
What is some of the thinking behind it?
What are some of the early decisions that we've made?
Why did we go for this particular tech stack versus that one?
Understanding those things can really give you a good amount of context to help make it better and make it more secure.
So starting there and really knowing your industry, knowing your customers is a really, really good first start.
And then of course, knowing your system.
Especially if you're new to a team, sit down with the team and really understand it.
Me personally, I'm a very visual person.
I like to get out a whiteboard and a couple of markers and just draw it out, to understand exactly:
Where do our users interact with our system?
Where do they do really important things like authenticating, and proving who they are?
Where do we take their data?
Where do we put that data?
Where do we even process that data?
All of that stuff is knowledge within your teams and that really helps drive a lot of the security stuff that you can do.
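If a whiteboard isn't handy, the same picture can live next to your code as a tiny diagram-as-code sketch. Here's a minimal example using the Python graphviz package; every component name in it is made up for illustration:

```python
# A minimal data-flow sketch using the graphviz package
# (pip install graphviz). All component names are hypothetical.
from graphviz import Digraph

dfd = Digraph("data_flow", format="png")

# Where users interact with the system, and where they authenticate.
dfd.node("user", "User (browser)")
dfd.node("api", "Web API")
dfd.node("auth", "Auth service (login + second factor)")
dfd.node("db", "Customer database")

# Where we take their data, and where we put it.
dfd.edge("user", "api", label="HTTPS requests")
dfd.edge("api", "auth", label="credentials")
dfd.edge("api", "db", label="personal data (read/write)")

dfd.render("data-flow")  # writes data-flow.png next to the source file
```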
So very, very early on, start with the team that you have - and then, from there, start with the tools that you have.
I've worked for quite a few of New Zealand's large organisations - quite a few that we've talked about today - and unfortunately, a lot of what I came across was people within those organisations pushing for a particular product: "this is going to be the surefire thing that's going to solve our security problems." We get so preoccupied with trying to figure out how to purchase and embed these big tools that we forget a lot about the tools that we do have.
So my next piece of advice is: think about the tools that you use to build your software and your products. Think about where your source code is actually hosted, and about the infrastructure it runs on.
You've probably got a plethora of SaaS tools and build tools that you use, so start out with just the basics of protecting those accounts.
It comes down to some of that basic security advice we hear from the likes of CERT New Zealand and other organisations, such as multi-factor authentication.
If those things have software that you can update, keep them updated.
But really, think about the stuff that's within your realm of influence to protect, rather than whether or not you need a particular software product to solve those things.
Really, a lot of the incidents that I've personally been involved in - back when I was part of CERT New Zealand, the country's incident response team - honestly came back to securing some of the support tools around the edges, rather than the more complex attacks that you might see.
So definitely start with securing the tools that you have.
The next couple I wanted to chat through are things that I think anyone here could relate to, whether or not they're within your realm of influence.
It probably depends on the role that you play or the team that you're part of, but every single thing a development team does has some element of how you can do it securely - and often you don't need to buy a widget or a tool, or bring in really expensive consultants to tell you what to do.
There's small things that can really cover that foundational level.
The very first one is configuring and deploying infrastructure.
So this is something that a lot of us understand: the concept of shared responsibility.
Just because you're using an as-a-service model doesn't mean that security is instantly no longer your problem.
So shared responsibility.
At the end of the day there's lots of configurations and things that you can do as part of your building to make sure that your spot is covered.
Now can that vendor still have a bad day and cause you to have a bad day?
Yes, that is also part of the shared responsibility, but what I'm saying here is that it's important to think about the tools that you use and what you can do to make them more secure.
And the key to this is: chances are, with a really simple search online, you'll find what they call security hardening guidelines for the tools that you use - where they say, "Hey, here are some of the configurations that are most important when it comes to security."
And this is stuff that you, within the stuff that you build, have the ability to control.
So that could be, for example: if you're standing up a whole bunch of servers within AWS, there are a few key configurations you can make part of your CloudFormation templates, so those things are automatically configured and you don't have to think about it each time.
That also - pro tip - will probably save you from getting a phone call later on, if you do have a security team, because something was exposed to the internet unknowingly.
So by building those things up front, you're doing your bit to make sure that the tools and the things that you use are built secure from the start - and the key phrase there is "security hardening guidelines".
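For a taste of what one of those hardening settings can look like in practice, here's a minimal sketch in Python using boto3 that blocks public access on an S3 bucket. The bucket name is hypothetical, and the same configuration can just as easily be baked into a CloudFormation template:

```python
# A minimal hardening sketch, assuming boto3 is installed and AWS
# credentials are configured. The bucket name is hypothetical.
import boto3

s3 = boto3.client("s3")

# Block every form of public access on the bucket, so a misconfigured
# ACL or bucket policy can't silently expose data to the internet.
s3.put_public_access_block(
    Bucket="my-product-uploads",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```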
The next one: if you're in the realm of testing, or you help out with the testing practice within your teams - security testing.
Yes, it's very cool: you've got penetration testers who do all these really cool red team things, and they're really, really neat, and the talks are really cool, but all of the stuff that they do is totally accessible to other people as well.
A lot of the build tools that we use nowadays, and the source code repos where we store our stuff, have a lot of this tooling built in.
If your stuff is hosted in GitHub or GitLab, they've got a whole bunch of built-in, integrated tools that will do your vulnerability scanning for you - they'll go through and scan for common vulnerabilities within the code that you have.
So my biggest advice here is use built-in tools in the stuff that you already have.
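The GitHub and GitLab scanners are mostly switches you flip in the repository settings rather than code you write, but the same idea works with open-source tooling too. As a stand-in, here's a hedged sketch that runs the pip-audit dependency scanner as part of a build and fails it when known vulnerabilities turn up:

```python
# A minimal sketch of failing a build on known-vulnerable dependencies,
# assuming pip-audit is installed (pip install pip-audit).
import subprocess
import sys

# pip-audit checks declared dependencies against public vulnerability
# databases and exits non-zero if it finds anything.
result = subprocess.run(["pip-audit", "--requirement", "requirements.txt"])

if result.returncode != 0:
    print("Known vulnerabilities found - failing the build.")
    sys.exit(result.returncode)
```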
Just the same as you might build test cases for how your users are meant to use a particular feature, you can always invert that and think about the bad use cases.
So, for example, if we're building changes to how people authenticate with our service, think about the reverse.
You obviously want someone to be able to log into your system, enter their username and password, and hopefully be prompted for a second factor of authentication - but what about the reverse?
What if someone was able to throw a whole bunch of garbage credential information that they found online at your login, to see if anything works?
At what point are we going to stop that from happening?
How might we know that that's happening?
And just try reversing some of those positive use cases into negative use cases.
Now, does that mean you do that every single time you test? Probably not. But do it for the most important ones - usually around authentication, or accepting input from people.
So that's just something small that you can start incorporating when you're building out some of your test cases.
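To make that concrete, here's a hedged sketch of what one of those reversed test cases might look like in pytest. The client fixture, the /login endpoint, and the expected status codes are all hypothetical stand-ins for whatever your own stack provides:

```python
# A sketch of a negative test case: garbage credentials thrown at the
# login endpoint should eventually be throttled or blocked. All names
# here are illustrative, not a real API.
def test_repeated_bad_logins_get_throttled(client):
    # Simulate someone replaying credentials they found online.
    for attempt in range(20):
        response = client.post(
            "/login",
            data={"username": "victim@example.com",
                  "password": f"guess-{attempt}"},
        )

    # At some point the system should push back: a lockout, a
    # 429 Too Many Requests, or a CAPTCHA challenge.
    assert response.status_code in (403, 429), \
        "credential stuffing was never throttled or blocked"
```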
The next one: a lot of you will build a certain layer of logging into the systems you're building, because you want to be able to tell when your system or application might be having a bad day, or to troubleshoot something that happens later.
Can I highly recommend that, as part of those things you're building in, you consider some of your "oh crud" moments.
There are some really key moments that you can build into those logs that you ship to that dashboard, or your Datadog server, or whatever you might use.
A lot of these things have a security benefit but also a performance benefit, which is a double win.
Knowing when your server has been constantly restarting or shutting down - that's a bit of a red flag that someone is messing with our servers and causing them to do things they shouldn't be doing.
And authentication to your server - that's usually the first big red flag from a security perspective, but it's also something that's helpful from a usability perspective as well.
And then, of course, when these configurations change - if someone's turned off logging, not only is that like turning off your visibility for maintaining your platform, it's also a really big concern from a security perspective, because someone may be trying to cover their tracks.
So again, when your teams are at that point, it's just a matter of starting to think about that one small additional thing that makes sure what we're building is not just resilient and maintainable, but that we can maintain the security of it as well.
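As a rough illustration of those "oh crud" moments, here's a minimal Python logging sketch; the event names and fields are invented for the example rather than any standard:

```python
# A minimal sketch of emitting security-relevant events alongside your
# normal application logs. Event names and fields are illustrative.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("security")

def on_failed_login(username: str, source_ip: str) -> None:
    # Failed authentication is usually the first red flag worth watching.
    log.warning("auth.failed user=%s ip=%s", username, source_ip)

def on_config_change(setting: str, changed_by: str) -> None:
    # A change like logging being turned off can mean someone is
    # covering their tracks - ship it to your dashboard.
    log.warning("config.changed setting=%s by=%s", setting, changed_by)

def on_unexpected_restart(service: str) -> None:
    # Constant restarts can mean someone is messing with the server.
    log.error("service.restarted service=%s", service)
```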
Threat modelling is something that Jade mentioned earlier, and I was sitting in my chair at the back and my ears just kind of perked up and I got really, really excited, as I tend to do when I hear the words "threat modelling".
I'm a really big fan - a huge security nerd - but again, I'm 100% a firm believer in the power of having a conversation and performing what they call a threat model: looking at a feature, or part of a system, or even a whole system (which is pretty massive), and trying to think about where our biggest security problems and biggest risks are. And the magic of running these sessions is that, again, I can facilitate them.
Jade mentioned that, yes, of course you can go and ask your security team to come and run the session for you - but honestly, I'd just be asking questions, because I have none of the answers. And there are so many frameworks and models out there that you can run through, because all of the knowledge and all of the power lies in the team.
So getting the whole team together, pooling that knowledge and context, and talking and running through it together is so powerful, because what it starts to pull apart and identify is that we might have inherently made a decision to, for example, expose something to the internet that didn't need to be exposed.
Well, did you know that putting anything on the internet is usually not a good time if it doesn't need to be there? So that's a decision you can call out right away, because, I guarantee you, re-engineering something to that degree afterwards is usually really, really painful.
But all it starts with is a question on a whiteboard and talking through and asking a few questions.
Now, if threat modelling is something that you're like, "oh, this is really interesting; we're working on something right now; maybe this is something worth trying," there's a bunch of frameworks out there.
Microsoft's STRIDE is one that's been around for ages; it tends to have a lot of pre-canned questions, and the community has made a whole bunch of material online that makes it much more accessible.
So highly recommend doing that for things that maybe feel really important.
Again, just like everything else: is it something that you should do every single time, or on a set cadence of once per month? No, because frankly you probably have enough on your plates without adding extra processes like that. What I'm saying is that when you have a really big project, or maybe a really big integration with another system, or you're working on something quite high risk, that is a perfect opportunity to sit down and run through an exercise like that.
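As a starting point, here's an illustrative cheat-sheet of the six STRIDE categories, each paired with one example question you might ask in a session. The questions are paraphrased examples, not an official list:

```python
# The six STRIDE threat categories, each with one example prompt.
# The prompts are paraphrased for illustration.
STRIDE = {
    "Spoofing": "Could someone pretend to be another user or service?",
    "Tampering": "Could data be modified in transit or at rest?",
    "Repudiation": "Could someone deny an action because we kept no logs?",
    "Information disclosure": "Could data leak to people who shouldn't see it?",
    "Denial of service": "Could someone make this unavailable to real users?",
    "Elevation of privilege": "Could a normal user gain admin capabilities?",
}

for threat, question in STRIDE.items():
    print(f"{threat}: {question}")
```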
And then the last thing that I've seen be really successful in doing your little part for security is for those of you who are the person who can influence others on the team - whether you're the leader, or the most senior person on the development team with the most knowledge.
But it all comes back to the same thing we've all been drumming on each morning: the culture aspect. It comes from the small things that you do and practise as a team, which start to form part of what your team is about.
Now, there's a million and one different things that you could do, and again it comes down to the type of team that you have.
Maybe you have a development team where all of you are particularly interested in, and very nerdy about, understanding and nitpicking the various aspects of technology and how things are built.
Well, maybe set up some time once a fortnight, after your retro, for a bit of a discussion breaking down some recent vulnerabilities: How did they happen? How did those things come to be? Is that something we should think about? Is there something we could learn from this? Or look at how the organisation and the team responded, reflect on that, and embed it into what you do. That could be something that works, if it interests your team.
It could also take the form of talking about mistakes that were made, because ultimately, at the end of the day, security and vulnerabilities are really terrible things to talk about - security is really all about keeping data safe, and your users feeling like their data is safe.
And usually, those conversations happen when that trust has been broken: their data has been lost, their accounts have been compromised, in some cases people have lost money, or a reputation has taken a hit - and ultimately, talking about that stuff can really feel terrible.
For those of you who live or work in organisations that have that really positive, blameless post-mortem review of your incidents, that's exactly where you can incorporate the security element.
You might have pushed out something that opened up a vulnerability and caused a security incident - not a good time. Well, take that back to your team and talk about the decisions and the mistakes that were made, and how you can change them going forward - opening up and being a little bit vulnerable within your team so that you can make better decisions.
So that's another way to build it up.
So many times within the organisations I've helped, I've heard teams say
"Oh, I was so stupid to do this" and
"I can't believe this person did that"
and that has no place in security, because shaming people and making them feel bad for the decisions they've made creates a negative security culture - and then nobody wants to participate.
So that could be something that your teams might find helpful.
Something else as well: sharing the load.
For those of you who are the most senior person on your team, or who feel that security is part of what you do: one would guess that in the past, people would just pass you the security tickets that came into the queue, or say "security wants to have a chat - go talk to this particular person," and all of that work came back to one person.
And what I encourage is to share the load.
Especially if you are the most senior person tackling that ticket: sure, you might lead on it, but take someone new along with you so they really understand:
How did you troubleshoot that?
How did you dig into that?
How did you fix it?
Because if everyone learns a little bit and understands what their realm of influence is, they might feel empowered to then take on the next one.
So sharing the load with other people just means that it's not going to be only your job - it's a shared culture amongst the team.
And ultimately, it all comes down to just what everyone else has been saying: it's the culture change and the mindset shift that we can all make as part of our day-to-day jobs.
And it doesn't mean it's going to have an overarching effect outside of your team instantly, but it does mean that the stuff your team pushes out is probably a little bit more secure - and ultimately, changing that culture and doing that little bit now will hopefully prepare you a little bit better for managing the cyber rain in the future.
Thank you!