In my research on the Dutch banking system, it became clear that the banks are seriously worried about social engineering. Techniques such as phishing and identity theft have become increasingly common. No reason for concern, right? Surely a system upgrade, some stronger passwords, or new forms of encryption, and all will be well again. Wrong! When it comes to social engineering, trust in technology is deadly. The solution, in fact, cannot be technological; it must be social.

The term social engineering has been around for decades, but in the last couple of years it has been popularized by the famous social engineer Kevin Mitnick. In the book Social Engineering: The Art of Human Hacking, another well-known social engineer, Christopher Hadnagy, defines social engineering as “the act of manipulating a person to take an action that may or may not be in the ‘target’s’ best interest.” This may include obtaining information, gaining access to a computer system, or getting the target to take a certain action. Mitnick pointed out that instead of hacking into a computer system, it is easier to “hack the human.” While cracking the code is nearly impossible, tricking someone into giving it to you is often relatively easy.

Countering these social engineering techniques tends to be difficult. As a result, banks are hesitant to contact their clients. Contacting the client means using media, and this usage fosters trust in those media. This trust proves devastating to the banks, but it is a nirvana for social engineers. As PJ Rey states in his essay Trust and Complex Technology: The Cyborg’s Modern Bargain, “it is no longer feasible to fully comprehend the inner workings of the innumerable devices that we depend on; rather, we are forced to trust that the institutions that deliver these devices to us have designed, tested, and maintained the devices properly.” Doug Hill builds on Rey’s point, noting that our trust in technologies extends to the people who use them as well as the people who created them. In short, banks trust their technology just as much as their employees and clients do.

It is not hard to find examples in popular discourse of the faith people place in technology. Every new piece of hardware or software is supposedly better than the previous one and will solve our problems and tricky situations. However, this blind trust in technology is exactly what social engineers exploit with sophisticated invented scenarios. One example is pretending to be a computer helpdesk operator, randomly calling employees of a company and claiming that somebody from their department called because there is a problem with one of the computers. Chances are that at some point an employee will say yes and fall into the trap, giving his or her password to the social engineer.

It goes without saying that the trust people have in technology is not the only factor in the equation. However, unexamined trust seems to be the big pitfall. If banks want to counter social engineering, they need to realize that this will not be done merely by upgrading password encryption or other technological aspects of their security systems. Further trust in technology will not remedy the problems that trust in technology created. Instead, the social side needs to be taken into account. The question we are asking is: How can we make people more critical (i.e., less trusting) about the dangers surrounding technology, especially when it involves their own wallet?

In response to a BBC article on how hackers outwit online banking identity security systems, security technologist Bruce Schneier proposes authenticating the transactions we make (much as credit card companies do). Although this sounds like a shift away from a technological solution, since it is more about the transaction behavior of the client, it poses other dangers. Back-end systems monitor for suspicious behavior. For example, if a client from The Netherlands signs in from, say, Bulgaria, the situation is flagged as suspicious and points are added to a risk score. If the risk score gets high enough, other means of authentication come into play, such as a telephone call to the client.
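To make the idea concrete, here is a minimal sketch of how such a risk score might work. It is purely illustrative: the rules, point values, threshold, and function names are assumptions invented for this example, not Schneier’s proposal or any real bank’s back-end system.

```python
# Hypothetical risk-scoring sketch. All rules, weights, and the threshold
# are invented for illustration and do not reflect any real bank's system.

def risk_score(transaction, client_profile):
    score = 0
    # Sign-in from a country other than the client's usual one
    if transaction["country"] != client_profile["home_country"]:
        score += 40
    # Amount well above what this client normally transfers
    if transaction["amount"] > 3 * client_profile["average_amount"]:
        score += 30
    # Payee the client has never transferred money to before
    if transaction["payee"] not in client_profile["known_payees"]:
        score += 20
    return score

def authenticate(transaction, client_profile, threshold=50):
    # Below the threshold the transaction goes through as usual;
    # above it, an extra step such as a phone call is triggered.
    if risk_score(transaction, client_profile) >= threshold:
        return "step-up authentication (e.g. call the client)"
    return "approve"

if __name__ == "__main__":
    profile = {
        "home_country": "NL",
        "average_amount": 150.0,
        "known_payees": {"energy company", "landlord"},
    }
    suspicious = {"country": "BG", "amount": 2000.0, "payee": "unknown account"}
    print(authenticate(suspicious, profile))  # -> step-up authentication
```

Even this toy version makes the next point obvious: the score only works because the bank already holds a profile of the client’s normal behavior.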

Authenticating a transaction means answering the question of whether that transaction makes any sense given the financial behavior of the client. This, as always, raises many questions about surveillance. Banks will need to know what your normal behavior is before they can establish what counts as suspicious behavior for a specific client. Banks probably wouldn’t mind this solution, but from a client’s perspective it feels like a violation of privacy. However, if we don’t all start being more critical, these sorts of invasive authentication schemes may soon become a reality.

Samuel Zwaan (@mediawetenschap) is a teacher and student in Media Studies at Utrecht University.