Before we start, let's introduce a few common terms relevant to social engineering:
- OSINT: Open Source Intelligence, data gathered from publicly available sources such as the internet.
- Phishing: attempting to gain access to information (e.g. passwords or credit card details) by posing as a legitimate entity over email.
- Pretexting: a social engineering technique used to gain sensitive data by lying about your identity.
- Scraping: a technique used to harvest data from websites (e.g. lists of users).
- Social Engineer: an attacker who uses manipulation as their primary means to achieve their goal.
- Vishing: attempting to gain access to information (e.g. passwords or credit card details) by posing as a legitimate entity over the phone.
So, on to the big question: what is social engineering? Simply put, it's the clever manipulation of the human tendency to trust, and it doesn't necessarily rely on any, to use hacker parlance, '1337' (leet/elite) hacking.
A social engineering test, unlike a traditional penetration test which focuses on finding security flaws in software, focuses on the 'wetware' component of an organisation: its people. After all, attackers know that sometimes the easiest way to gain access to a computer system is to simply go after the user – why waste time trying to crack a secure password when you can just ask for it instead?
So, an example: we've all seen that bit in a movie where some guy walks into an office wearing glasses and carrying a few bits of IT equipment haphazardly stuffed into a bag. He heads straight for the CEO's office and settles himself in, claiming he's been called in to fix a fault. A few keystrokes later the 'fault' is fixed, and he walks out with a USB stick loaded with the CEO's password and the design documents for the company's newest product. Two days later the designs are on the internet and everyone is making their own version, and the 'repairman' is still able to access the CEO's account whenever he wants.
Hollywood, right?
Unfortunately not - it really can be that simple. For example, just take a look at the movies Takedown or Hackers, the TV series Mr Robot and White Collar, or the first few seconds of this clip posted by CNN Money on YouTube: https://www.youtube.com/watch?v=PWVN3Rq4gzw.
Normally this is where numbers around the frequency of this kind of attack, or likelihood of successful compromise, are introduced, but social engineering is hard to pin down. There’s no electronic trace for some of these attacks the way there would be with hacking or phishing, and if it’s done well you may not even realise you’ve been socially engineered.
As a company legally employed to perform social engineering, Context are bound by laws and contracts which limit the amount of time we can devote to an intrusion attempt and the methods we are allowed to use: no lock picking, for example, and no impersonating existing companies or law enforcement. And yet our success rate is roughly fifty percent across both physical intrusion and vishing engagements (phone-based phishing), the two most common types of social engineering engagement we perform. Now imagine what someone without the same constraints could achieve.
This is the reason for this blog post: to show you that focusing on securing your technology isn't enough to keep your data secure. Physical and procedural security (procedural being the processes and rules users are required to follow, such as not sharing login details) is just as important, as is the awareness of the people in your organisation. Because, to an attacker, everything and everyone is part of the available attack surface.
Breaking it Down
Whether legally testing a company’s physical and procedural security, or approaching it as an actual attacker, the rough anatomy of this style of attack breaks down into two distinct phases:
- Reconnaissance
- The Attack itself
Reconnaissance is aimed at identifying any information which will help determine the best plan of attack and support the attacker's story if challenged. This might include identifying company employees on social media sites such as LinkedIn to either target or impersonate; looking for floor plans or visuals on sites such as Google Maps, which can help identify a way in without setting foot near the building; finding pictures of employees wearing their ID badges on the company website; and finding phone numbers and email addresses to target using attacks such as phishing and vishing. It may also include simply sitting outside the building, for example in a nearby coffee shop, to watch people coming and going, observe what sort of physical security is in place and look for visible ID that can be faked using a card printer.
With this information gathered, an attacker can then determine how they want to approach their attack. If physical security is lacking they might simply try walking in through the front door; otherwise they might try to find a way to get access to the network through phishing or vishing. Alternatively, they might use a combination of these methods to register themselves as a visitor so that when they arrive they're given a pass and shown right in, or pose as a repairman who's been called out to fix something urgently.
Let’s take a look at some scenarios, using fictional names and companies, which are based on Context’s experience in performing social engineering engagements.
Case Study 1 – ACME Co.
The target is the office of a medium-sized legal firm, situated on the third floor of a building shared with several other companies. Alice, a hacktivist with a grudge against the company, wants to get inside and steal documents that will be embarrassing if leaked to the public.
She starts by doing a little online digging, looking for anything that might be useful when she tries to get inside. It doesn't take her long to scrape the company website for a list of email addresses and phone numbers linked to specific departments, and to gather some photos that show the office space on the third floor, including the fact that every desk has a computer on it. Next she uses LinkedIn to gather a list of ACME employees, and then searches social media sites such as Twitter, Facebook and Instagram for more details on those employees. Finally, after a quick Google search she also finds the website of the architect who designed the building, which shows the floor plan of each floor.
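As an aside, the sort of scraping Alice performs here is trivial to automate. The sketch below is purely illustrative: the URL, and the idea that the addresses appear in plain HTML, are assumptions made up for this example. It simply fetches a hypothetical contact page and pulls out anything that looks like an email address.

```python
import re
import requests

# Hypothetical contact page; the URL and page layout are assumptions for this sketch.
URL = "https://www.acme-example.com/contact"

# A deliberately simple pattern that matches most "name@domain.tld" style addresses.
EMAIL_PATTERN = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def scrape_emails(url):
    """Fetch a page and return the unique email addresses found in its HTML."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    return sorted(set(EMAIL_PATTERN.findall(response.text)))

if __name__ == "__main__":
    for address in scrape_emails(URL):
        print(address)
```

In practice an attacker would point something like this at every public page they can find, and do much the same for phone numbers and staff names; the point is simply that this stage takes minutes, not days.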
Next Alice spends a few days watching the office building itself, which is situated close to a busy shopping complex so she doesn't look out of place as she wanders back and forth with shopping bags or plays on her phone. She's actually checking out the security and taking photos on her phone, noting that everyone appears to arrive at around 8:55am each weekday morning and leave at 5pm on the dot each evening, and that most employees walk across the road to the shopping centre at 12:30pm for lunch, wearing the employee ID she now has a clear picture of.
From a café ideally situated across the street she can look through the glass-fronted atrium and watch employees using pass-controlled turnstiles to get to the lift lobby. She can also see plenty of people in high-vis jackets, who appear to be working on the floor below ACME, using a separate door to get to the lifts; this door doesn't appear to have any form of security measure in place, but it is situated right next to the security desk.
After a few days spent watching the building Alice is satisfied that she has enough information to make her attempt. She uses a badge printer to print two ID badges – one to match the employee ID she’s seen the ACME staff wearing, and one that identifies her as a generic building inspector.
Monday morning she puts on a suit, much like those that ACME staff wear, packs her fake passes into her handbag and sets off for the office. She arrives at 8:55am, flashes the building inspector pass at the building receptionist and walks through the door she's seen the contractors using, joining the back of the mass of people arriving for the day and waiting for a lift. The group is big enough, and constantly shifting as people get into lifts and more arrive, that no one notices as she patiently waits for someone else to call a lift for the third floor. When they do, she moves through the crowd to join them, carefully slipping her inspector ID away and replacing it with the fake ACME ID instead, bringing her phone up to pretend to be deep in conversation.
The third floor itself is a partial unknown – Alice has the floor plans but doesn't know what security measures she's going to find. So when the doors open and her companions step out, she follows them closely, continuing her charade of being on the phone. One presses their card against a reader to get into the office and, looking mildly irritated, holds the door open to allow Alice and the others through without needing to use their own IDs. So far so good: no one seems to notice that she doesn't belong.
Once inside, Alice 'hangs up' on her call and looks around for an empty desk. Most seem to be home to a number of personal items, suggesting that she might be challenged if she sits there, but she spies a bank of desks with no items to one side of the room and makes her way across, smiling and exchanging brief pleasantries with anyone who looks her way. Again no one challenges her, and she's able to boot up the computer on the desk without needing her pre-prepared spiel about being a new employee without an assigned desk. Instead she sets about trying to guess the username and password to log into the computer, using the names she identified on LinkedIn, compiled into the common 'firstname.lastname' and 'first initial.lastname' formats, together with a list of a few commonly used passwords; it doesn't take her long to guess one correctly using this method. Once she's in, she spends a little time digging around for the information she wants, copies it onto a USB stick, packs up and leaves without a word.
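To make that guessing step a little more concrete, here is a minimal, purely illustrative sketch of how a scraped list of names could be turned into candidate logins in the 'firstname.lastname' and 'first initial.lastname' formats mentioned above and paired with a handful of weak passwords. The names and passwords shown are invented placeholders, and how the guesses would actually be submitted depends entirely on the logon interface in front of the attacker, so that part isn't shown.

```python
# Employee names gathered during reconnaissance; these are invented placeholders.
employees = [("Jane", "Doe"), ("John", "Smith")]

# A handful of weak passwords of the kind that regularly turn up in audits.
common_passwords = ["Password1", "Welcome1", "Summer2017"]

def username_candidates(first, last):
    """Build the 'firstname.lastname' and 'first initial.lastname' formats for one employee."""
    first, last = first.lower(), last.lower()
    return [f"{first}.{last}", f"{first[0]}.{last}"]

# Pair every candidate username with every common password.
guesses = [
    (username, password)
    for first, last in employees
    for username in username_candidates(first, last)
    for password in common_passwords
]

for username, password in guesses:
    print(f"try {username} / {password}")
```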
Later that afternoon the stolen documents appear on the internet, and finally someone begins to realise that the unfamiliar woman in the office that morning may not have been an employee after all.
Case Study 2 – Umbrella Corp.
This time the target is a large multi-national organisation with offices all over the world, and Bob is a financially motivated hacker who wants to sell their secrets to a competitor but has been unable to find a vulnerability in their computer security that he can exploit.
So instead, like Alice, he decides to try and socially engineer his way into getting what he wants and starts by doing some online reconnaissance. He compiles lists of employees, departments, names and email addresses from the main Umbrella website and from various social media sites. He isn’t, however, able to find pictures of the office (beyond an overview from Google Maps), or any details that tell him what physical security measures are in place at the offices in the nearest cities.
Again, like Alice, he also spends some time watching the offices themselves, picking the two offices that are closest to him. After a few days at each site he's seen enough to know that getting physical access is going to be difficult – the offices are both situated in fairly open spaces, with fences around the perimeter, a guard checking passes on the main gate, and pass-controlled mantrap doors at all building entrances to prevent tailgating. He could try to come up with a story to get in, but he feels that the risk of getting caught is too high.
Instead Bob decides to try to gain remote access to the internal Umbrella network – the company is large enough that there is almost certainly some sort of VPN (virtual private network) solution that employees can use to access their data when they are not in the office. He plans to do this by phoning Umbrella's IT helpdesk and impersonating an employee who is out of the office and having trouble with the VPN. To support this story he spends a little more time researching possible targets on social media before making his attempt, and with a little digging he finds a few employees who, according to their social media statuses, are either on holiday or travelling for work and who therefore make ideal targets.
Armed with these names, and various details he was able to acquire online such as their birthdays, likely base offices and job titles, Bob phones the number for the UK IT helpdesk and identifies himself as Ryan Jones, the first name on his list. He explains that he’s on holiday but has to check a document on his desktop and unfortunately the VPN isn’t working.
The helpdesk operator offers to look into it, and then after a few moments explains that there’s no reason that it shouldn’t be working, and in doing so mentions the type of VPN solution in use. Bob asks if it’s possible that the password has expired, but the operator points out that as the certificate was only issued last week that’s highly unlikely. Unwilling to push his luck any further Bob hangs up, waits a few minutes and tries again, knowing that having a single helpdesk to manage all of the UK offices means he’s unlikely to speak to the same operator twice.
This time Bob pretends to be James Phillips, and again leads with his explanation that he's on holiday but needs to look at a document urgently and can't seem to access the VPN. This time the operator is a little more sceptical and asks for James' employee number, which Bob doesn't know, so he mutters something about not being able to remember it and says he'll call back in a few minutes before ending the call.
Again he waits a few minutes before trying again, getting through to a third operator and giving the name Evan Smith. He tries his story again, this time adding an air of frustration at having to do this while on holiday and then apologising to the poor operator because it's not his fault, trying to play for sympathy. It works, and the operator asks for his username so he can do a quick check to see what the problem is, which turns out to be that Evan doesn't appear to have an active VPN certificate. Thinking quickly, Bob professes his surprise at this and wonders out loud if it could be because another E. Smith left the company; could IT have deleted the wrong certificate by mistake? The operator agrees that's probably the case and quickly offers to email over a new certificate and text the password. Again explaining that he's on holiday, and that his wife will kill him if she sees him using his work mobile, Bob manages to get the operator to email the VPN certificate to a newly created personal email account instead and, once he's confirmed he has the information he needs, thanks the operator and hangs up.
Now it's a simple case of downloading the correct VPN software, which he knows from the first call, and using the new certificate and password to get access to Evan Smith's corporate desktop. From there Bob can easily gain access to other areas of the Umbrella network and the files he is looking for.
A few months later, when Umbrella’s competitors start to steal their clients and bring competing ideas to market, the stolen data is finally traced back to the compromised account which belonged to an office-based worker who had never requested remote access.
What Went Wrong?
Unfortunately, while the above scenarios might seem to have gone too smoothly to be true, they are actually far from uncommon. Take the ACME tailgating incident, for example. It exploited most people's reluctance to be confrontational, or to interrupt someone who is on the phone, and Alice's subsequent friendly behaviour once she was in the office gave staff the impression that she was in fact supposed to be there. There's a perception that someone who sneaks into an office will be obvious, acting noticeably suspiciously, but that is not necessarily the case. Whatever the reason, this reluctance to challenge provided the perfect opening, and it is not as uncommon a finding as you may expect.
Successful social engineering relies on the manipulation of human behaviour to achieve a desired outcome. Playing on sympathy or camaraderie to avoid security checks, making someone feel like they've done a good deed when they help out someone who appears to be having a bad day, making them feel too awkward to challenge someone even though they're sure that person isn't supposed to be there, or even using a sense of fear or urgency to get the desired result; these are all tools in a social engineer's arsenal.
How Do We Make It Right?
So what can be done to mitigate these points of failure? Consider all of the points above and what their common factor is: people.
As previously stated, most people are generally reluctant to close a door in someone’s face to prevent tailgating, or to interrupt someone on the phone to challenge their identity. Part of this is because we are taught to be polite, to hold open the door for someone (particularly if they are holding too much stuff to do so themselves, a trick often used in social engineering), but it’s also the fear of reprisal if you challenge the wrong person. In many cases the worst security offenders are the most senior employees, citing reasons such as being ‘too busy’ to bother with a long password, or relying on the fact that people know them to exempt them from wearing visible ID.
This means that the single biggest factor that can help guard against social engineering attacks is user awareness, which is talked about in depth in a previous Context blog post.
A few other simple but key points to improve security are:
- Ensure buy-in from all levels of the company, from the top down;
- Make sure you have a strong security policy in place across the entire organisation, and that all employees are aware of the policy and their responsibilities regarding it; and
- Encourage employees to challenge anyone without visible ID, and to actively prevent tailgating, without fear of reprisal.
In addition to the above, Context tends to find the same general issues each time we perform a social engineering engagement. To help you protect your organisation here are some of the key recommendations we often provide:
- Implement multi-level verification for anyone requesting access to secure areas or IT systems, which should always include a request for a detail that is not available online. Consider, for example, the hoops you are required to jump through when using telephone banking to verify that you are who you say you are;
- Make sure there is a robust policy in place to verify that any visitors to the office are properly identified and have authorisation from an existing member of staff who can verify the reason for the visit;
- Implement a well-defined policy for dealing with anyone who is suspected of being present without authorisation, such as alerting security and ensuring that the suspect is accompanied at all times until their identity and authorisation can be verified;
- Make sure computer monitors are angled, or external facing windows are tinted, to avoid the contents of the screen being read (for example using camera zoom) from street-level or neighbouring buildings;
- Consider using a solution to prevent unrecognised devices, such as laptops, from connecting to the corporate network from any desk or meeting room;
- Conduct social engineering engagements and awareness training to assess your risk and to raise user awareness about the threat of social engineering, including tactics such as phishing and vishing;
- Encourage employees to remove their ID when they leave the building, making it more difficult for a social engineer to obtain one or to design a credible fake; and
- Install and maintain anti-virus software, firewalls and other intrusion protection and detection systems on corporate computers and networks to reduce the impact of a successful phishing attack against employees.
Regarding the threat from phishing and vishing specifically, there are a lot of resources available which provide good advice on protective measures; rather than repeat them, here are a few helpful links:
- https://www.contextis.com/blog/user-awareness-important-tool-protecting-your-organisation-cyber-threats
- https://www.us-cert.gov/ncas/tips/ST04-014
- https://securingthehuman.sans.org/newsletters/ouch/issues/OUCH-201701_en.pdf
- http://www.techrepublic.com/blog/it-security/back-to-basics-defending-against-phishing-attacks/
- https://us.norton.com/internetsecurity-online-scams-how-to-protect-against-phishing-scams.html
Summary and Conclusions
With the exception of the risks posed by phishing, the threat of social engineering as a way to gain access to corporate IT systems seems all too often to be overlooked in favour of securing those systems against computer-based attacks. Hopefully this post has demonstrated that the key to securing your organisation's IT systems lies in preparing for all avenues of attack; locking the front and back doors and all the windows is no good if there's a hole in your roof, after all.
People are a vital and often overlooked component of a security system, part of the attack surface available to an attacker, and without proper awareness and training may be the weak link that exposes your data to the world.
References
- https://www.forbes.com/sites/ciocentral/2011/11/03/humans-the-weakest-link-in-information-security/#2a44c705de87
- https://www.symantec.com/connect/articles/social-engineering-fundamentals-part-i-hacker-tactics
- https://www.us-cert.gov/ncas/tips/ST04-014
- https://blog.malwarebytes.com/101/2016/01/hacking-your-head-how-cybercriminals-use-social-engineering/
- CNN Money (https://www.youtube.com/watch?v=PWVN3Rq4gzw)
- https://www.contextis.com/blog/user-awareness-important-tool-protecting-your-organisation-cyber-threats
- https://securingthehuman.sans.org/newsletters/ouch/issues/OUCH-201701_en.pdf
- http://www.techrepublic.com/blog/it-security/back-to-basics-defending-against-phishing-attacks/
- https://us.norton.com/internetsecurity-online-scams-how-to-protect-against-phishing-scams.html