The Combitech blog
Our experts blog about trends and insights from Cyber Security and digitalization. They share their experience from the forefront of technology.
Do you have a step counter on your mobile? Do you use an app for setting goals and following up your training? Do you look at how an article or a service is ranked before you click Buy? Have you and your manager set up measurable objectives for your work this year… objectives that might affect your salary or bonus? Many of us are nearly obsessed by measuring, ranking and evaluating – both in our private and professional lives. Cecilia Unell, business developer and enterprise architect at Combitech, reflects on the art of measurement.
Chris Dancy, who is often called “the most connected man on Earth”, was interviewed by Ny Teknik in March 2019, where he explained how he gathers data on both himself and his surroundings. It includes everything from pulse and blood pressure, to planned and finished activities, to the sound and light levels in his surroundings. He gathers all this data into an “internet of Chris Dancy” and automates it so that he gets an alert if he exceeds certain values, for example if he hasn’t moved enough lately.
During the interview Dancy touched on the fact that we sometimes get caught up in measurement itself; we’re measuring and evaluating without actually knowing if we are measuring something important or not. He captured it this way: “We haven’t learned to measure what we care about, so instead we are caring about what we measure.”
The philosopher Jonna Bornemark tackles the same theme in her book Renaissance for the Unmeasurable – Coming to Terms with Global Pedantry. She examines how measurement and evaluation in themselves are sometimes held to be more important than the reality being measured. Her book takes up cases from healthcare where zealous measurement, and slavish conformance to and follow-up of established processes and routines, risk giving us poorer, not better, healthcare.
What I have experienced in professional life is that we seem to operate under a management culture that is close to obsessed with measurement and evaluation. SMART goals, key ratios, KPIs and balanced scorecards – what’s next? It’s easy to understand this desire to measure things; it’s hard to get to where we want without setting goals, and hard to know if we’ve arrived in the right place if we can’t measure and follow up. The problem is just that sometimes it’s hard to measure what we actually want, and we have to settle for measuring one or more things that are easier to check – something that’s related to what we actually want to measure. This is the difference between direct and indirect measurement methods.
Traditionally, the most important key figure for a company is its profit. This is a good example of a direct measurement method. Most companies would also like to assure future profitability by being good at innovation – developing radically new services, products, business models and so forth. But what’s the best way to measure and evaluate innovation power or potential? This is where we need indirect measurement methods.
Can we measure our innovation level by the number of ideas that our esteemed colleagues drop into a suggestion box? Surely that’s connected to innovation? Well yes, it might be, but what good is that measurement if it turns out that half of these ideas are irrelevant to the company? Maybe it’s better to measure the number of innovations that we launch in the marketplace each year? Or perhaps the number of successful (profitable?) innovations each year instead? The question is whether we will know after a year if a bright new innovation is successful or not. And how can we best measure the learning that pulses through the entire innovation process, regardless of whether the resulting innovation in itself is judged to be a success or not?
So measuring innovation potential requires us to rely on several different indirect measurements. Moreover, these need to be combined and weighted in a well thought-out way; otherwise we risk spending our time measuring something that isn't actually important, and following up and guiding our activities based on measurements that don't lead us to our objectives.
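One way to picture such a combination is a simple weighted score. The sketch below is purely illustrative: the metric names, their normalisation and the weights are hypothetical placeholders, and any real organisation would have to choose and calibrate its own.

```python
# Hypothetical sketch: combining several indirect innovation metrics into
# one weighted score. Metric names and weights are illustrative only.

def innovation_score(metrics: dict, weights: dict) -> float:
    """Weighted average of metric values, each pre-normalised to 0..1."""
    total_weight = sum(weights.values())
    return sum(metrics[name] * w for name, w in weights.items()) / total_weight

# Three indirect measurements, already normalised to the range 0..1.
metrics = {
    "relevant_ideas_ratio": 0.5,   # share of submitted ideas judged relevant
    "launched_innovations": 0.3,   # launches this year relative to target
    "documented_learnings": 0.8,   # learning retrospectives completed vs. planned
}
weights = {
    "relevant_ideas_ratio": 1.0,
    "launched_innovations": 2.0,   # weighted higher: closer to the real outcome
    "documented_learnings": 1.0,
}

print(round(innovation_score(metrics, weights), 3))  # → 0.475
```

The point of the weighting step is exactly the one made above: a raw count (ideas in the suggestion box) says little on its own, so measurements closer to the outcome we care about get a heavier weight.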
Beyond the difficulty of identifying what we actually want to measure, and then guiding our actions in accordance with those measurements, our era of digitalization brings the problem of excess data. We have an incredible potential to measure and gather data without great difficulty, and in a very short time. For example, we can measure how many transactions or inquiries a customer service representative handles daily, how many products a machine can spit out in a given time, and so on.
How can we handle this and navigate through the large amounts of data coming from our measurements? One way is to seek assistance from Artificial Intelligence (AI) to discover patterns and relations between different measurement points – patterns that would be difficult or impossible to discover on our own. For example, several studies have shown that AI is better than healthcare specialists at identifying different types of cancer. These studies involve digitalized tissue samples, where an AI program is trained to detect patterns that indicate cancer in the sample.
But whether or not you use AI to identify patterns, be cautious. Finding a pattern, or relationship, in the data is not the same thing as finding a cause-and-effect relationship. The fact that most engineers in a company are men doesn't mean, for example, that men are better suited to engineering roles than women. The clear relation between gender and role is not caused by gender itself, but by other factors.
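The trap can be shown with a few lines of simulation. In this deliberately artificial sketch, two variables correlate almost perfectly even though neither causes the other – both are driven by a hidden third factor (a confounder), exactly the situation the example above warns about.

```python
# Illustrative sketch: a strong correlation produced entirely by a hidden
# confounder, with no causal link between x and y. All data are synthetic.
import random

random.seed(0)
confounder = [random.gauss(0, 1) for _ in range(1000)]
x = [c + random.gauss(0, 0.1) for c in confounder]  # driven by the confounder
y = [c + random.gauss(0, 0.1) for c in confounder]  # also driven by it

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((ai - mean_a) * (bi - mean_b) for ai, bi in zip(a, b))
    sd_a = sum((ai - mean_a) ** 2 for ai in a) ** 0.5
    sd_b = sum((bi - mean_b) ** 2 for bi in b) ** 0.5
    return cov / (sd_a * sd_b)

# x and y correlate strongly, yet changing one would not change the other.
print(pearson(x, y) > 0.9)  # → True
```

An AI pattern detector fed x and y alone would happily report the relationship; only knowledge of how the data came about reveals that acting on it would be pointless.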
So how are we going to measure “the right way”? There’s no simple answer to that question, unfortunately, but here are some fundamentals:
SMART stands for Specific, Measurable, Achievable, Relevant/Realistic, Time-based
Ottsjö, Petter (2019) Ny Teknik, interview with Chris Dancy (Swedish; paywall), https://www.nyteknik.se/premium/sa-blev-han-varldens-mest-uppkopplade-man-6950145
Bornemark, Jonna (2018) "Det omätbaras renässans – en uppgörelse med pedanternas världsherravälde" (Title translation: Renaissance for the unmeasurable – Coming to terms with global pedantry), Stockholm: Volante. ISBN 9789188659170
Tucker, Ian (2018) “AI Cancer Detectors”, https://www.theguardian.com/technology/2018/jun/10/artificial-intelligence-cancer-detectors-the-five
Towers-Clark, Charles (2019) “The Cutting-Edge Of AI Cancer Detection” https://www.forbes.com/sites/charlestowersclark/2019/04/30/the-cutting-edge-of-ai-cancer-detection/#2afa1a347336
Johan Thulin, senior consultant in Cyber Security at Combitech, reflects on how security can become a natural part of projects, and on how an organization can develop a product or system where security doesn't get in the way. Can Design Thinking contribute to a solution?
Security is one of the biggest challenges in a connected world. We are constantly reminded of this by the many publications describing attacks against connected devices. At the same time my colleagues and I – who work with information security – often know how to make a given system secure, or how to prevent the attack described in a given article. So how can it be that security remains a large challenge and a limiting factor, one that sometimes even limits our ability to fully utilize the potential of digitalization?
There are naturally many reasons, and I’m not going to claim that I have all the answers, neither about the nature of the problem nor the solution. I have no silver bullet, and I’m certainly no Arya Stark in Game of Thrones, who can dispatch an army with a single thrust of a blade. Be that as it may, I do think that the pathway to a solution is that we discuss and expose the problem.
I’ve often wondered why security is so seldom baked into projects right from the beginning – leaving security to be something that pops up at the end because some regulation requires it. Naturally, there are several reasons. Sometimes it’s because the project owner avoids touching security issues until he or she is absolutely forced to. But often I think it’s because we simply choose a simpler solution and just point towards a collection of security requirements, such as in a standard. I’ve even heard security experts ask very early in a project to see a complete system description, or ask if an information inventory has been conducted… neither of which exists so early in the process, of course.
We security experts tend to be seen as – and to take on the role of – auditors, not as constructive problem solvers and system builders. I believe we must all be better at adapting ourselves and working under existing circumstances. Early in a project, it might be sufficient to establish the things that are so obviously good to have, such as encrypted messaging, or to authenticate all users, and so forth. And then as the solution develops and takes form, we can add details and make things more concrete.
As I see it, we security experts have to work at finding solutions that enable, rather than limit or constrain. That’s what I often hear from customers… that they don’t want that absolutely secure system that will withstand every possible threat and problem. What they want instead is a system that is adapted to the organization’s true threat profile.
For my work, in order to inject security early on and create engagement, I recently tried working with a design team that uses the Design Thinking method. This proved to be a way to get security ideas into the early design phase.
The method builds on three points:
Our first challenge concerned getting users to describe which security functionality they actually wanted. We chose to survey users’ needs with the help of User Journeys and discovered that it was rather easy to complement these with statements about what the users didn’t want to happen. For example, if users say they “want to use their mobile phones to open and start their cars”, we could build out this statement with “only my own mobile phone can open and start the car”. The functional requirement then also became a security requirement. This creates a function that users actually think is important, and can relate to, even if it’s nothing that they would instinctively mention – probably because they consider it obvious.
To achieve this kind of dialogue with users, we quickly realized that there are communication challenges. My colleagues and I are used to talking with security experts and developers using words like “requirement satisfied” or “authentication token”. We had to re-school ourselves, so that we could understand each other and have a constructive dialogue.
What I also noted was that requests or limitations often had to be stated at a high level for users to be able to relate to them. We could seldom formulate a solid, clear security requirement directly from a user request – doing so would have produced a big heap of requirements that users couldn't relate to. So we used a middle ground, where we formulated one or more security objectives based on the functionality desired by the users. These objectives can be viewed as high-level requirements, or simply as objectives, formulated in a way that is understandable to users and that explains the purpose of the security functions we will later have to develop.
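The middle ground described above can be pictured as a simple mapping from a user-journey statement to security objectives phrased in the users' own language, which are only later refined into technical requirements. Everything in this sketch – the statement, the objectives and the helper name – is illustrative, not a record of the actual project.

```python
# Hypothetical sketch: pairing a user-journey statement with derived
# security objectives, to be refined into requirements later. All
# entries are illustrative examples, not real project artefacts.
journey = {
    "statement": "I want to use my mobile phone to open and start my car",
    "security_objectives": [
        "Only my own mobile phone can open and start the car",
        "Losing my phone must not mean losing control of my car",
    ],
}

def derive_requirements(entry: dict) -> list:
    """Turn each user-facing objective into a placeholder requirement."""
    return [f"REQ: {objective}" for objective in entry["security_objectives"]]

for requirement in derive_requirements(journey):
    print(requirement)
```

The value of the intermediate layer is that the objectives stay meaningful to users, while still giving security experts a concrete starting point for detailed requirements as the solution takes form.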
The principle of Design Thinking is to understand the true problem that a product should solve, and then use an iterative process and prototypes to test and find the best solutions. I believe this method should be useful in identifying better and more effective security solutions. It's about finding that happy medium, where the users' requirements and desires also mean something.
It might be possible to build a system that is completely secure, but at the same time is completely unusable, or a system that is extremely easy to use, but isn’t secure at all. But I would like to challenge that ordinary view that you have to compromise. Can’t we find a third way and a common mindset, where security experts contribute to more and better functionality, as well as value beyond just security? We have to ask ourselves what we need to carry this out, and for my part I believe it’s about being better at identifying threats and risks. What do you think?
Would you like to learn more?
Johan Thulin supports and advises Combitech's customers on information security. At Paranoia 2019, the Nordic cybersecurity conference, he and Tina Lindgren, senior consultant in cyber security with a PhD in information theory, presented on creative thinking in security work. For further information, see contact cards.
A while ago, we arranged Hackaway day – a day when we tried to hack things to learn more. This day, we focused on smart speakers. But, what does it really mean to hack something? The answer to that depends on who you ask.
In our case, we look for ways to use a product or a service that make it possible for unauthorized people to get information or access that they shouldn't have. During Hackaway day, we not only tested the speakers' built-in security by performing so-called penetration tests, but also looked closer at how users and manufacturers can reduce security risks through the correct settings. It turned out to be a very informative day, as we found several ways to get both information and access.
We started by investigating what information in the speakers could be accessed via interfaces and applications. We established quite quickly that it is possible to access basically any information in the speaker if you are connected to the same Wi-Fi. Thankfully, speakers do not hold a lot of sensitive information, but we did manage to access the Wi-Fi password and a few PIN codes. This might not seem so dramatic, but if this information ends up in the wrong hands it opens up further security threats against devices on the same Wi-Fi – and against sensitive services, if you have reused the same PIN code for them.
The next step was to see if we could get the speakers to restart and return to their initial factory state – what we call the “first time” process. If you are able to do this, it is often possible for unauthorized people to take control of the speaker. This is not possible if a login is required on the web interface that is often used for this.
If your speaker supports WPS (Wi-Fi Protected Setup), you should avoid using it, since it makes the speaker more vulnerable to attacks from someone nearby.
We also looked at the update process. It is very important that you update your speaker when new updates become available. If the software isn't kept up to date, the speaker will sooner or later become vulnerable.
Updates can also be a risk. If an unauthorized person sends a false update to your smart speaker and you install it, that person can control the speaker. Through an externally controlled device in your network, it is possible to attack other units on the same network, or to create a botnet of several speakers that can be used for large attacks (see for example https://krebsonsecurity.com/2018/05/study-attack-on-krebsonsecurity-cost-iot-device-owners-323k/).
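The standard defence against false updates is for the device to verify a cryptographic signature before installing anything. The sketch below is a simplified illustration of that idea only: real devices typically verify firmware against a vendor's public key (asymmetric signatures), whereas this example uses an HMAC with an invented shared key so it stays self-contained.

```python
# Simplified sketch of update verification: install only firmware whose
# signature checks out. The key and firmware strings are invented for
# illustration; real devices normally use vendor public-key signatures.
import hashlib
import hmac

VENDOR_KEY = b"device-provisioned-secret"  # illustrative stand-in key

def sign_update(firmware: bytes) -> bytes:
    """Compute the signature the legitimate vendor would attach."""
    return hmac.new(VENDOR_KEY, firmware, hashlib.sha256).digest()

def install_update(firmware: bytes, signature: bytes) -> bool:
    """Install only if the signature verifies (constant-time comparison)."""
    if not hmac.compare_digest(sign_update(firmware), signature):
        return False  # reject a forged or tampered update
    # ... a real device would flash the firmware here ...
    return True

genuine = b"firmware v2.1"
print(install_update(genuine, sign_update(genuine)))          # → True
print(install_update(b"malicious payload", sign_update(genuine)))  # → False
```

A forged update fails verification because the attacker cannot produce a valid signature without the key – which is why, as a user, installing updates from the official channel remains the safest option.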
As a user, you should also question the permissions held by applications connected to the speaker. Is it really necessary for the application to have access to text messages, calls, GPS, camera or microphone?
Hackaway day went by fast and we found several things to dig into, but to sum up, I want to share some tips that reduce the risks with smart home speakers:
For the eleventh consecutive year, Paranoia, the largest cyber security conference in the Nordics, was recently arranged in Oslo. The conference was an opportunity to listen to some of the leading experts in the industry, such as Bruce Schneier and the ethical hacker FC. As usual, Paranoia was also a meeting place for discussing current cyber security issues. Three things above all stand out from this year's conference:
We must accept the fact that we can't just build walls and hope that no one succeeds in breaking in. As Bruce Schneier said in his presentation, the internet was not designed with security in mind. The large networks we have today cannot be fully secured against breaches, and therefore we have to assume that we will be, or already have been, hacked.
Cyber security expert Mikko Hypponen shared a parable of a bank vault without motion detectors on the inside. If someone manages to get into the vault, that person is free to grab anything he or she can – and the same goes for our IT systems. Therefore, we need a strategy for detecting and responding to breaches. Rob Wainwright, previously director of Europol, also talked about the importance of gathering and analysing data in order to maximize knowledge about the threats we face. The more knowledge we have, the easier it is to act efficiently in our security work.
Over the past year we have seen many examples of attacks where organizations that are not the main target end up as collateral damage. We also need to bear in mind that today even nation states sometimes initiate attacks – something we will likely see more of in the future. We therefore need increased cooperation between organizations and countries, but this needs to build upon trust. And the question is: who can we really trust?
There is a lack of resources within cyber security, in terms of competence as well as efficient tools. This is no news, but during Paranoia we got to hear discussions on the issue from both EU and US perspectives. Several speakers, such as Rob Wainwright and Bruce Schneier, emphasised that the competence shortfall already amounts to hundreds of thousands of people, even millions depending on which regions of the world you include.
How to deal with this shortcoming was a recurring topic during Paranoia. On the one hand, we need to attract more people to the industry through, for example, education and gamification of the recruitment process. On the other hand, we also need to start using automation and AI to make jobs more efficient and easier. In the future, I think we will get the best results by combining human expertise with AI.
Cyber security is not just about privacy – cybercrime can physically harm people unless we help protect the critical infrastructure. Today, cyber security is a must for a society to be safe. Håkan Buskhe, CEO of Saab, talked about the importance of companies being aware that they are also part of a country’s cyber security puzzle and that through their security work they support critical societal functions.
There is a lot happening in cyber security and this year's Paranoia gave me – and certainly all other participants – many interesting insights and things to ponder. Thanks to everyone involved for a brilliant conference, see you at Paranoia 2019!
In the latest episode of the Combitech pod, I and my colleague Johan Thulin report directly from Paranoia. We interview Håkan Buskhe, CEO of Saab, ethical hacker FC and Dr. Jessica Barker, security expert focusing on the human part of cyber security. Listen to the pod here.
When it comes to creating a solid security culture and working effectively with cyber security, knowledge is key. Successful cyber security and focusing on the right things requires a comprehensive understanding of how a cyber attack can happen.
Attacks are often carried out using a two-pronged approach: targeting IT technology while also exploiting human factors through social engineering. Attackers often have a good idea of how things work at the company they're targeting.
They tend to exploit one or more vulnerable areas in a company's technical security set-up that lack up-to-date protection, in order to access the IT system. They can then try to get an employee to trigger a technical weak spot – for example by clicking on a link in an email that leads to a malicious website, or by using unauthorized USB memory sticks.
The person carrying out the attack can also exploit the fact that the company is not protecting its information properly. Here are a few examples:
The attacker can also use the fact that employees are consciously or unwittingly careless about how they handle sensitive information. For example:
Protecting yourself is about being aware of the threats and risks and deciding what information most needs protecting. Once you’ve done this, you’ll be able to make smart choices and take appropriate action. It’s important to remember that technology only solves part of the problem. Ultimately the best way to protect your data is to provide training for everyone in your organization to make sure they are security-savvy.
Feel free to add your thoughts on this subject in the comments field. Cyber attacks can come in many different guises, which is partly why cyber security is so challenging and fascinating. Here at Combitech we’ve taken on the challenge of sharing knowledge about #CyberSecurity and raising security awareness in Sweden.
For example, one in three companies or organizations in Sweden has not ensured that everyone is aware of its security policy. We want to change this.
The film above presents the collaboration between Combitech, Securitas, and Microsoft. The collaboration aimed to effectively study how a digitalised solution – in this case with the help of Microsoft's HoloLens – could be applied to assist a security guard in their workday.
The concept of Augmented Reality has been around for a long time and in various forms. It involves using eyewear and mobile devices to overlay an augmented, three-dimensional image on top of the real, visible world. The result is frequently astonishing and impressive. But, notwithstanding the novelty, the challenge lies in identifying which benefits the technology can actually offer. Superimposing pretty images over reality is rarely enough to provide real-world benefits. This is the point Daniel Akenine (CTO, Microsoft Sweden) makes when he asks: “What can you do that you cannot already do, better, with other technologies?”
In this project, instead of focusing solely on the technology, we concentrated on end-user needs. A basic idea that emerged was the possibility for a security guard to detect problems which are otherwise easy to miss, and to support guards in performing complex actions. This is what Yacir Chelbat-Persson (Innovation Director at Securitas) means by “Augmented Security”.
This reasoning changed how we viewed the technology. The ability of the HoloLens device to scan its surroundings and detect changes in them proved more beneficial than the visualisation of 3D models. The aim instead became producing the simplest possible visualisations, for the sake of clarity and so as not to distract the user unnecessarily.
Combitech's role in the project was to integrate the functions into HoloLens, and to offer expertise on HMI – how AR support should be structured in order to function well in stressful situations. Our legacy of effective decision support for fighter aircraft pilots proved especially valuable here.
Combitech's extensive portfolio also includes diverse solutions on how the entire chain can be connected. For example, how premises and buildings can be scanned, and how information detailing actions or steps can be integrated into the system – information which, normally, can only be found hidden away in documents or is known only to a few experts.
Prospering through this digital transformation requires end-user benefits to be verified as early as possible, without being steered by the technology. The collaboration between Combitech, Microsoft and Securitas truly is an excellent example of this.
Finally, I would like to thank the team at Combitech Reality Labs: Björn Rudin, Tobias Larsson, Jonas Ekskog.
Not again! Isn't that the feeling, when the effects of the WannaCry incident are still fresh in the memory and the next attack is already rearing its ugly head? As usual, the attack relies on computers on which up-to-date security patches have not been installed, prompting the initial reaction: "How hard can it be to update your system?!" However, it's not actually always so simple.
Many IT systems are connected to other systems for which automatic updates can pose a problem. Updating an IT system can consequently result in, for example, a stop in a production line if the effects of the update are not fully and completely understood first. In other words, it's not as easy as it first appears, and especially not when ever more systems and services are connected and integrated with each other. A single system might well be able to repel attacks – but it is critical that we protect the entire ecosystem, including all the various technologies and deliverables.
Take transport services, for example. A lorry has several different systems and even communicates with other lorries and systems while in transit. The information is analysed and processed in a cloud service, and then transmitted to partners. And while this is going on, you or I can follow its movement via an app on our smartphones. For all of this to work, the entire chain has to be particularly robust.
That's why I believe that new regulatory frameworks, such as the forthcoming NIS directive, must succeed. The purpose of the directive is, of course, to ensure that the functions critical to our society are robust and able to withstand different types of IT attacks. The introduction of the NIS directive will not solve all of our problems, but it will create better conditions for improving the ability of our systems and services to withstand attacks, and will ensure that we are prepared in the event that something does happen. As everybody knows in times like these, it is not those with the highest defensive walls who win, but rather those who are best prepared to deal with incidents.
Listen to our podcast for further information on the vulnerable society and the NIS directive. (In Swedish)
I probably shouldn't have been surprised when a friend asked me whether we will dare to get sick in the future.
When you work with information security, you often get asked a lot of interesting questions by family and friends who may not be experts in the field and who have a somewhat hazy picture of what it is you actually do.
My friend had read DN's (a Swedish daily) excellent article on insecure hospital systems (especially those still using Windows XP) and became very concerned.
It certainly is interesting that Windows XP is still being used, but I think we should focus on and call attention to what is often referred to as the next revolution: that we're on the way to connecting just about everything. I don't think that we fully understand what this means. Good evidence of this is that, over the years, we have become used to sentences like ”I found the control system to a crematorium on Shodan” or ”Radio-controlled pacemakers are not as difficult to hack as you might think”.
Fortunately, there are individuals in the EU who are also concerned about this – primarily because several of the connected systems belong to organisations which are critical for society to function. The EU has therefore decided on a directive, colloquially referred to as the NIS directive, which will be interpreted and legislated in all member countries. Including Sweden.
The new law will force organisations with functions critical for society to work systematically with information security and to make sure that they can continue to provide these critical functions.
This is, of course, a good thing, but I hope that the increased interest in information security generated by the NIS directive (and GDPR) will spread across society as a whole. That people will understand why information security is so important. So that we can dare to get sick in the future.
What do you think – am I naive or do you also believe that information security should be elevated in our consciousness? For those of you interested in the NIS directive, I discuss it with my colleagues Susan Bergman and Jonas Stewén in the latest Combitech podcast. Listen to it here. (In Swedish)
Are self-driving cars secure? What happens if terrorists manage to hack one?
I am often asked this question when speaking eagerly about my belief that autonomous vehicles are the future – especially after the tragic incident in Stockholm earlier this spring. Many people assume that cars cannot be hacked at present.
I recently listened to a talk by the American security expert Charlie Miller at the Norwegian conference Paranoia, where he spoke about his research into vehicle security. Charlie Miller is well known for remotely hacking a Jeep Cherokee a few years ago, gaining control of the acceleration, braking and steering systems (inside the vehicle at the time was a journalist from Wired, whose article on this nightmare scenario can be read here).
Charlie Miller has repeatedly demonstrated truly shocking security shortcomings in vehicles from Jeep. When asked if he believed that only Jeep had these bugs, he stated that that was highly unlikely, but that he did not have the funding to test cars from other manufacturers.
The most common picture of cars and how they work is one that has prevailed since the beginning of the millennium. At that time, cars were completely isolated systems, and the most advanced thing you could do was connect a CD player to listen to your music. It is a mystery how this picture has persisted now that we are used to connecting our smartphones to the speaker system while having Google Maps open on the instrument panel.
That said, I think the automotive industry is definitely starting to get a grip on the situation (thanks to individuals like Charlie Miller), and most manufacturers are consciously working to improve information security in their vehicles. But I would like to highlight a few good principles that will help this work significantly:
But what if you're not a vehicle manufacturer (not everyone is) yet you are still concerned about your car being hacked? Firstly, we must concede that this is very uncommon – for the time being. For those who would still like to beef up security:
There is today a palpable shortage of competence and resources within IT and information security. A survey commissioned by the Swedish IT and Telecom Industries trade organisation shows that there is now a deficit of 30,000 IT experts in Sweden, and that this figure continues to grow. One major reason for the growing demand for IT and cyber security personnel is the increasing number of cyber attacks, connected devices and the vulnerabilities brought to light by digitalisation in general. We therefore face a significant challenge in the future, as we need more people with the right IT security competence than are currently available.
So – how can we in the industry help to increase the number of experts, improve diversity and create a better delivery capacity and innovation within cyber security?
When we talk about cybersecurity in the context of organisations, personnel and access control, we consider three stages: before, during and after employment – or joiners, movers and leavers. Can we think along the same lines about what we need to safeguard growth in our own industry, and set better targets for ourselves?
Finally, I read a fantastic quote from Dr. Jessica Barker: "A lot of industry professionals wear their 20-30 years of industry experience as a badge of honour, and so they should, but the problems haven’t been solved in the past 20-30 years. So we do need fresh perspectives and new ways of thinking. That experience needs to be combined with new talent and new perspectives."
That quote hits the nail on the head. Last week Jessica addressed the attendees of Paranoia in Oslo, along with several other inspiring speakers. A shining example of an exciting arena where we in the industry can meet, grow and exchange experiences.
We live in really interesting times, with a pace of development that is completely unprecedented. All in all, this is marvellous, but it also means that an ever-growing number of people need to learn new technologies and work procedures. And this is a positive thing, provided there's enough time for it. Which isn't always the case.
That's why I believe we need to start thinking more about where our own skills can have the most benefit. This may seem obvious, but owing to today's shortage of IT technicians – a shortage expected to become more pronounced in the months and years ahead – we have to make this a greater priority.
One area that has truly experienced an upsurge over the last few years is managed security services, delivered through Security Operations Centres. By drawing on a partner's expertise and services, organisations can detect and respond to various security-related incidents; together, the organisation and the partner decide what is to be included and how to handle incidents jointly. This can involve everything from different forms of cyber attacks against organisations to creating a real-time situational overview. The crux of the matter is that this, much like everything else in a connected and integrated world, must be done continuously, around the clock. For instance, it's not possible to shut down security operations over the summer, as it is during this time that threats are often most severe.
This is where prioritisation comes in. If we want our own staff to manage these threats, considerable effort is required to retain skills in security and in specific products. Alternatively, one can enlist the help of a partner who manages security-related incidents for the company and for many others; that breadth of experience often makes a partner better suited to the task than an individual organisation.
Although everyone has different circumstances, we all have the same problems. The current pace of development will, in all likelihood, make it difficult and resource-consuming for an individual organisation to address these issues alone, which is why I feel it's important to get a head start and determine how such a solution would work for the organisation. Naturally, there are a multitude of rules and regulations which demand – or will demand – major investment into the area. But I still think the central question is:
Is it worth an organisation's time to build up proprietary capabilities or is it more important for the organisation to continue developing its core services or products together with customers, partners and even competitors?
In other words, we should do what we do best, right?
Nowadays, you are able to use electronic identification (eID), like BankID or Mobilt BankID in Sweden, for numerous services, such as logging into government websites, signing contracts and authorising financial transactions. But along with these digital advantages – as is so often the case – come untold risks.
Instances of identity theft are rising sharply and, on an almost weekly basis, the media reports on new cases involving Swedes who have become victims of identity fraud. Your identity is then used, for example, to access or steal your money. Even if you solely use your eID for a specific service such as Swish (a Swedish electronic payment service), fraudsters who manage to access your eID are able to utilise it for a great deal more than just Swish.
A digital identity is based on something unique to you, such as a confidential digital key on a card, computer or mobile phone, including a PIN code or password to use the key. This is called two-factor authentication, i.e. two different components are required to authenticate (confirm) a user's identity. For example, Mobilt BankID requires users to have access to a specific mobile device and a password. Having only one of these is insufficient – which increases security.
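The two-factor principle described above can be sketched in a few lines of code. This is purely illustrative – the key, PIN and function names are hypothetical and bear no relation to how BankID is actually implemented:

```python
import hashlib
import hmac

# Hypothetical stored credentials for one user: a device-bound secret key
# (the "something you have") and a hash of the PIN (the "something you know").
DEVICE_KEY = b"secret-key-stored-on-this-device-only"
PIN_HASH = hashlib.sha256(b"482916").hexdigest()

def authenticate(device_key: bytes, pin: str) -> bool:
    """Both factors must check out; either one alone is insufficient."""
    has_device = hmac.compare_digest(device_key, DEVICE_KEY)
    knows_pin = hmac.compare_digest(
        hashlib.sha256(pin.encode()).hexdigest(), PIN_HASH
    )
    return has_device and knows_pin
```

The point of the sketch is the final line: stealing the phone without the PIN fails, and phishing the PIN without the phone fails too.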
The security of BankID and Mobilt BankID also relies on a number of additional features, but the most important thing for you as a user is to protect your unique component and your password.
If your unique component is electronically stored on special hardware, such as an electronic identification card (BankID), you are required to keep the hardware in a safe place and select a secure PIN code.
If it is stored on an internet-connected device such as a computer, smartphone or tablet, it may be exposed to, or at risk of, attack. Furthermore, devices such as smartphones and tablets are liable to be lost or stolen, as you often carry them with you.
In the latest Combitech podcast episode, I discuss identity theft with my colleagues Susan Bergman and Johan Thulin. Listen to the episode on A-cast (in Swedish).
Digitalization means the world we are familiar with is changing. In just a few years, several major companies that are currently on the crest of a wave will have lost their top positions. Experts say seven of ten professions that our children will be pursuing do not yet exist. Meanwhile, it has been predicted that some 25 percent of companies’ digital traffic will be outside their control.
On top of all this, a raft of new laws is in the pipeline, such as GDPR and NIS, imposing tougher requirements on how we protect our information and that of others.
In the midst of all this change, we need to take stock and ask questions: Is it reasonable to allow our interconnected society to grind to a halt because of inadequate security? Is it alright for the data of millions of users to be floating around on the internet because one company has been hacked? Should it be possible to disable power supplies or hijack driverless vehicles? The answer, of course, is no. The fact that legislation is now beginning to keep pace with developments is something we should all be pleased about. If we’re honest, this should have been the case a long time ago, right?
So how should we act in this changing world? I think it’s important not to be overwhelmed by all the statistics that are thrown at us on a daily basis, but instead, we should start experimenting! Take advantage of all the wonderful new technology out there and try it out to see if it can streamline processes and generate new business. But do your experimenting with security in mind. It’s no longer possible to wait and see which technology will become the standard. We have to try different paths and change the way we work en route.
Take Blockchain as an example. Who can say how this technology will be used five years from now? Is it only relevant to fintech? Is it going to revolutionize IoT with rapid, traceable microtransactions? Will Blockchain disappear to make way for smarter technology? No-one knows for sure. But it is possible to explore the possibilities in its current form. Maybe it’s the case that the 3rd or 4th generation of a new technology is the one that makes a breakthrough, but by that time you’ll already be on board.
Back to the topic of integrating security into digital developments. To give an example, how many people would buy a car that focused solely on function without protecting the driver and passengers? It’s exactly where we are now when it comes to cyber security. We need to make the transition from desirable features to essential features. Cyber security is an essential feature if the digitalization of our society is to be successful. We must have a society that is both interconnected and able to handle all new technologies. But also a secure society that we can rely on.
In our latest Combitech podcast, my colleagues Susan Bergman, Tina Lindgren and Johan Thulin discuss how prepared we are for digitalization. Listen here [Swedish only].
It can hardly have escaped anyone’s notice that the new General Data Protection Regulation, or GDPR, will enter into force next year. But with just 450 days to go, companies’ knowledge of GDPR is still shockingly poor.
GDPR enters into force on 25 May, 2018, and applies to all companies operating in the EU or handling the personal data of EU citizens. Despite this, several surveys reveal that companies have very limited knowledge of the legislation. In a survey by Dimension Research, pretty much all the companies that were approached (97 percent) had no finished plan in place for GDPR compliance. This is despite 90 percent of the companies admitting that their existing procedures and control systems were inadequate.
GDPR includes a vast number of new requirements that need to be considered. Meanwhile, the introduction of painful financial sanctions for those that breach the law means the work of implementing GDPR should be at the top of everyone’s agenda. And that’s just not the case at the moment.
Several experts believe implementing GDPR will involve more work than was required to manage the millennium bug. And you have to remember that we had several years to plan and prepare for the millennium bug. A lot more time than 450 days...
So what do we need to do to ensure we’re complying with GDPR when it enters into force? Here are a few essential questions to ask yourself on the road to compliance:
- What personal data is collected and managed?
- How is that data collected? How is it used? How is it sent and stored?
- How, and using what method, can the data be shared?
- How well are you protecting the registered person’s rights?
- On what legal basis are you managing personal data in your organization?
- What security measures and protective mechanisms are in place to address identified risks?
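Answering the first of these questions usually means building an inventory of processing activities. As a minimal sketch, such an inventory could be a structured record per activity – the field names and example entries here are illustrative, not an official GDPR template:

```python
from dataclasses import dataclass

@dataclass
class ProcessingActivity:
    data_category: str   # what personal data is collected
    purpose: str         # how it is used
    storage: str         # where and how it is stored
    shared_with: str     # who it may be shared with
    legal_basis: str     # e.g. consent, contract, legal obligation
    safeguards: str      # security measures in place

# Two illustrative entries in the inventory.
inventory = [
    ProcessingActivity("email address", "newsletter", "CRM database",
                       "mailing provider", "consent", "encrypted at rest"),
    ProcessingActivity("salary details", "payroll", "HR system",
                       "tax authority", "legal obligation", "access-controlled"),
]

# A minimal compliance check: every activity must state a legal basis.
missing_basis = [a.data_category for a in inventory if not a.legal_basis]
```

Even a simple register like this makes the follow-up questions – legal basis, sharing, safeguards – concrete enough to review systematically.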
How far has your company come in terms of implementing GDPR? What do you regard as being the main challenges?
Digitalization permeates every aspect of society and generates a wealth of opportunities for both companies and individuals. We are increasingly relying on digital functions and linking up more smart devices with the internet, such as house alarms, baby monitors and activity trackers.
And on top of that we now have solutions that enable us to control all linked-up devices in the home just by talking to them. Solutions such as Google Home and Amazon’s Alexa allow users to do things like switch lights on, get the latest traffic information, start playing their favourite music or adjust the temperature in the home – all using voice control.
There’s no doubt that digitalization brings convenience and simplifies many aspects of everyday life. But it’s important not to be blinded by all the opportunities offered by digitalization without reflecting on the increased vulnerability associated with linking the devices we use. A solution based on voice control registers everything that is being said in the home, so it can, for example, be a good idea to switch off the function when private conversations are taking place in the home, or if you’re working from home and receiving phone calls in which sensitive information is being discussed.
For a long time, cyber criminals were almost exclusively targeting companies or authorities rather than individuals. This was because an individual’s data was often not regarded as interesting enough to sell on to a third party, unless it concerned bank or credit card details. But more recently cyber criminals have realized that an individual’s data is often valuable to the person in question. The person can therefore be blackmailed into paying a ransom, i.e. a hefty sum, in order to have their personal data restored; this is the business model behind ransomware.
Increased digitalization in society has also meant the emergence of a raft of seamless payment and ID solutions. Most people have Mobilt BankID on their phones, which can of course be used as identification when dealing with authorities, taking out bank loans, submitting tax declarations and shopping online. Naturally this service simplifies many processes, but it also means the consequences of an attack become more serious, and your mobile phone becomes a store of hugely sensitive material.
In other words, increased digitalization in society brings both major benefits and new opportunities, but at a cost. In our latest Combitech podcast (Swedish only), my colleagues Susan Bergman and Elina Ramsell and I discuss how prepared we are for digitalization and its impact on society. Have a listen!
2016 has seen increased digitalization fuelling the DevOps shift in earnest, but at the same time the challenges are becoming increasingly clear, particularly for those of us who are security experts. How do we make sure we don’t get left behind in the rapid pace of developments?
2016 feels a bit like the year we began to see a serious shift towards DevOps – strategies aimed at bringing software development and IT operations closer together. I think Gartner’s prediction was right on the money when they said 25 percent of all global IT companies would implement this shift in 2016. And when you think about it, it’s not all that strange. If there’s anything that sparks people’s interest, it’s concepts that involve making products faster, or shortening the time to market.
Now that IT companies are finally starting to use DevOps strategies and even beginning to adapt to the culture that DevOps represents, the challenges are coming to the fore. One such challenge is how those of us working with cyber security should adapt our approach. What tools do we need to have to hand in order to actually keep up with the rapid pace of developments?
As an IT security specialist, I was initially sceptical about the whole DevOps movement and felt that upping the tempo would basically mean we’d be left behind. After all, security tends to be something people barely have time for as it is – what would it turn into? But over time I’ve begun to view it as a major opportunity to change the way we work with security and make it more effective.
Are you the kind of person who’s been saying for years that “security has to be in place right from the start”, but never really managed to get it to turn out that way? Then the shift to DevOps might just be your chance to shine. One of the benefits of DevOps is that the functional input of all stakeholders is taken on board at an earlier stage (including information security of course), to then be managed in an automated way. This guarantees predictable and short release cycles.
Unfortunately, this is not a way of working that many of us are used to. But I believe an integral approach in the development process is absolutely essential. In order to facilitate cooperation, I usually encourage everyone involved – with responsibility for security, development, administration, quality and testing – to use the same kinds of tools and processes as much as possible.
The shift to DevOps isn’t just about integrating seamlessly into the development process. If security specialists also embrace the same tools and automate in the same way as others, we will not only be able to incorporate simple tasks into software but also introduce more controls during the development process.
It could involve including our code analysis tools and controls in the development process and getting more secure code as a result – code that ultimately reaches the production environment. We can continually subject code and systems to automated attacks during the development phase, allowing problems to be identified much earlier in the process instead of at the final testing stage prior to commissioning.
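As a toy illustration of the idea – a real pipeline would use a dedicated static-analysis tool rather than a hand-rolled pattern check like this – a build step could flag risky constructs before code is merged. The patterns and messages below are made up for the example:

```python
import re

# Hypothetical deny-list of patterns a pipeline step might flag.
RISKY_PATTERNS = {
    r"\beval\(": "use of eval()",
    r"password\s*=\s*[\"']": "hard-coded password",
    r"verify\s*=\s*False": "TLS verification disabled",
}

def scan(source: str) -> list[str]:
    """Return a list of findings for one source file's contents."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, description in RISKY_PATTERNS.items():
            if re.search(pattern, line):
                findings.append(f"line {lineno}: {description}")
    return findings
```

In a DevOps pipeline, a step like this would fail the build whenever `scan()` returns any findings, so problems surface at commit time rather than just before commissioning.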
At the end of the day, DevOps and the increased automation that comes with it enables me to focus on the bigger and more complex issues – and that’s where security experts can be most useful.
The way we work today is completely different from the way things were 20 years ago. Nowadays we expect to be able to read emails on our phones, participate in global video conferences while on our summer hols and order goods online. In a short space of time we’ve managed to streamline our way of working. The problem is that many companies are still using the same internal training methods that were popular back when we were listening to music on cassettes.
Four ways to create an effective information security training programme:
1. Analyse the current situation. Delivering training that fails to meet the needs of the target group is a waste of time. Employees need training that is tailored to their needs and their reality at work.
2. Plan. Not all employees have access to the same information and the consequences of incorrect behaviour vary depending on the employee’s position. A company whose work is risk-based usually categorizes its employees into three levels, depending on their access to information and level of influence at the company. The training initiatives are then adapted to the needs of each risk group.
3. Change behaviour. The purpose of training and information initiatives is to change employee behaviour. A common misconception is that more information is synonymous with improved security awareness. Changing people’s behaviour requires the use of educational tools, such as:
- Cyber security roadmaps, practical security tips that employees can consult when they come up against various risks, for example phishing.
- Dialogue maps, a workshop tool in which the team discusses the dilemmas and challenges relating to their security work in day-to-day operations.
- An educational intranet, where employees can easily search for information when they really need it, which can reduce training time.
4. Measure and optimize the effect. Modern research has developed methods for measuring actual changes in employee behaviour. Indicative scenario methods allow you to evaluate the effects of training and prioritize future initiatives.
Employees are often portrayed as the weakest link in security efforts, but they are also the key to an appropriate and dynamic level of security. The question is whether your organization is relying on outdated techniques to manage current and future challenges.
Listen to the Combitech podcast [Swedish only], in which my colleagues and I talk more about security awareness and social engineering
In my last post, about cyber threats, I talked about a number of security risks that it’s important to protect yourself against. As a follow-up, here are my five top intrusion prevention tips to protect your company/organization. The points can also help you decide what’s important for your private data.
1. Make sure everyone is aware of the threats. It only takes one employee being caught off guard – clicking on the wrong link, revealing too much in a conversation or being careless about managing information – for unauthorized persons to end up with access. A clear policy on how to manage information on the work computer, mobile and outside the company network offers good protection.
2. Make sure all IT equipment is kept updated with the latest and securest version of any software. Also make sure you have an up-to-date and accurate picture of what is on the network, primarily at the interface. And avoid default settings that are not secure, such as default passwords. This applies to work computers, but also to routers, printers, firewalls and suchlike. Your browser is a particularly vulnerable point, so make sure you protect it too. Work mobiles should also be updated and have the proper settings.
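Checking for insecure defaults can even be partly automated. As a minimal sketch, a script auditing device accounts might compare them against known factory credentials – the list below is a toy sample, not an exhaustive inventory:

```python
# A small sample of factory-default credentials commonly shipped on
# routers, printers and similar devices (illustrative, not exhaustive).
KNOWN_DEFAULTS = {
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "root"),
    ("admin", "1234"),
}

def uses_default_credentials(username: str, password: str) -> bool:
    """Flag device accounts still running on factory settings."""
    return (username.lower(), password.lower()) in KNOWN_DEFAULTS
```

Any device flagged by a check like this should have its credentials changed immediately, before it is exposed on the network.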
3. Make sure the organization is able to handle incidents – because they will happen. Continual technical monitoring of the network makes it easier to take action when something happens, and to detect and stop attacks as soon as possible. It’s also really important to have an effective incident management process in place, so employees know who to turn to and any incidents are dealt with appropriately.
4. Use a risk analysis to adapt the level of protection to the information in question. If the protection is not strong enough, the hacker will have no trouble accessing the information. But if it’s too strong, users may find the system too fiddly and start taking shortcuts instead.
5. Offer protection against eavesdropping. Organize robust encryption for all types of communication, including voice conversations. And make sure there are meeting rooms available for discussing sensitive information.
Of course these measures require a bit of work, but they’re well worth the effort in the long run.
For the more eager among you who want to get things moving quickly, I’ve got five relatively quick and specific tips to start with while you’re working on the above:
The standard hacker profile of a thrill-seeker who is drawn by the challenge of trying to get into closed systems no longer fits with reality. There’s a huge amount to gain these days from cyber criminality – and hackers can be anyone from bored teenagers, to criminal organizations… or entire nations that are directly or indirectly attacking other nations.
To understand the mindset of a hacker, you must first understand why they carry out their attacks. Often the reasons are obvious – the hacker is out to steal data or destroy a system. But sometimes it’s more complicated than that. For example, an attack against an organization may just be part of an overall strategy targeting another organization. Then there are the ideologically motivated hackers who are engaged in a kind of digital warfare aimed at those who don’t share their views.
Typically, hackers opt for the easiest route into a system – all it takes is for the attacker to detect a fault in the system or its connected environment. And if it’s a professional hacker we’re dealing with, they’ll most likely have both the time and the resources to find a way in.
Areas where the system receives data from outside, such as websites where users register, are the perfect weak spot for a hacker. The same applies to functions and services that are rarely used. Then there are, for example, alarms, heating and ventilation systems and other connected systems with user interfaces where security is low, because we’re barely aware of their existence.
So when you think about it, a hacker doesn’t really need to be all that clever when it comes to technology. Most of what you need to launch an attack can be found online, for example on the darknet, which is basically the internet’s black market.
Many hackers have homed in on social engineering, a method that exploits psychological factors in an organization’s people to get hold of the information required. Hacking people instead of computers is often far simpler, because people like to be helpful.
But why would anyone want to hack you as an individual? Partly because it’s easy to make a quick buck if the attacker gets hold of your bank details. But ransomware is also quite common. This involves planting a programme in the victim’s computer, for example using a USB stick or by tricking the victim into clicking on a link in an email. The programme locks the computer and the attacker then demands a sum of money from the victim in order to unlock the computer.
Nowadays, DDoS attacks are also common, in which the attacker takes control of several computers, often standard home computers, and uses them to attack the real target, such as a website.
Another reason may be that they want to use your computer or mobile to attack your employer. Accessing an employee’s computer and then subsequently a company’s entire network is often much simpler than attacking the organization directly.
Feel free to add your thoughts on this subject in the comments field. Cyber threats can come in many different guises, which is partly why cyber security is so challenging and fascinating.
“First and foremost, we need to raise the level of awareness of the information our systems and networks contain. Security isn’t just about developing technical solutions. In a broader perspective it’s about identifying a company’s most vital information and deciding what needs protecting.”
This was Marcus Wallenberg’s take on the concept of cyber security during an interview in the latest issue of our customer magazine, Combined.
He believes the concept should be viewed in a broader risk context and identifies one of the management team’s most important jobs: risk management, in this case relating to IT systems and the information stored in them.
Risk management is about balancing business or operational benefit with security. It requires management teams to have sufficient knowledge to impose the right requirements on the operative processes. There also needs to be a solid crisis management organization in place, i.e. one capable of making the right decisions when information systems come under attack.
It’s vital to understand the value of the information handled by the business and the consequences of such information being lost or corrupted. The consequences for contractors or a third party often also need to be considered. And it’s about knowing what is most critical and in need of protection – and when. Cyber security is a constantly evolving concept.
A management team also needs to understand potential threats and the various contexts in which threats arise.
Threats may manifest themselves in different ways, but the aim is often the same – extortion, industrial espionage, brand attacks, theft of intellectual property, sabotage and suchlike. Management teams need to be capable of assessing the consequences of a threat in various operating situations, using qualified input values from employees, to make well-informed business or operational decisions.
A management team should be able to answer key fundamental questions about cyber security:
The answer to the last question may sound simple: Yes or no. But answering it requires a solid understanding of all the other questions. It also requires the right information and decision-making documentation from the organization. And most importantly, a good level of security awareness.
Improving isn’t just about raising the level of security. It also means adapting it in response to existing threats and risks. It’s about being able to prioritize and take action at the right time. That’s why cyber security is clearly a matter for the management team.
Many people are aware of the security risks out there, but our surveys of companies and other organizations suggest that when it comes to cyber security, the level of knowledge is somewhat lacking. So here are a few questions for you to consider, to test your own level of security awareness.
1. You visit your bank’s website and the address field in Internet Explorer turns green. What does this mean?
a: The bank is certified according to the ISO 14001 environmental standard.
b: The page you are visiting has been confirmed as being virus-free by Microsoft.
c: A reliable organization has verified that the page definitely belongs to the bank.
2. There’s a new person at work. You haven’t met them, but they want to add you as a contact on LinkedIn. What do you do?
a: Accept straight away. The more friends I have, the more popular I am, right?
b: Only accept if they’re attractive. It could be the start of a new office fling.
c: Don’t accept until you know who they really are.
3. A person from your helpdesk calls you and wants to verify some information. What do you do?
a: Nice to be able to help them for once!
b: You try to verify that it really is the helpdesk, for example by calling them back.
c: It’s such a pain when people disturb you by calling when you’re trying to work! You ask them to call back tomorrow.
4. You get an email from someone in your HR department that should have gone to one of your managers, in which the manager is asked to confirm the attached salaries. The email also contains a file called Salaries_2017.xls. What do you do?
a: Call the HR person and tell them you probably shouldn’t have received the file.
b: Check your antivirus software is up-to-date and open the file.
c: You need to be quick before anyone realizes you shouldn’t have got it and asks you to delete it, so you open the file immediately.
5. You’ve joined a new gym and you create an account to allow you to book training sessions. Which of the following is a good standard for creating a password?
a: The name of your personal trainer; you associate it with the gym, so you won’t forget it.
b: A long word from the dictionary with certain letters replaced by similar numbers.
c: A combination of lower and upper case letters, a couple of figures and other special characters.
6. While waiting for one of your friends to respond to your latest snap, you notice an update for the operating system on your mobile. What do you do?
a. Update as soon as possible.
b. Ignore it. You’ve just thought of a good update for your snapstory – and it feels important not to forget it.
c. Think: I probably should update it, but usually forget to.
7. You’re watching cute cat videos online and you suddenly get a popup telling you you’ve won a free TV from Media Markt. What do you do?
a: Thank your lucky stars, you really needed one of those curved TVs like Oli’s! You quickly click on the link.
b: You send the link to all your friends. Share the joy!
c: You report the website to Media Markt and Google’s safe-browser team at https://www.google.com/safebrowsing/report_phish/
8. That cute guy you met at Marie’s sends a message via Facebook including a link. He’s abbreviated the address, probably because he couldn’t be bothered to write it all out on his phone. What do you do?
a: You click on the link. After all he said he was a policeman, so he must be trustworthy, right?
b: Ask one of your friends to open it instead. You can’t be too careful!
c: You check where the address actually takes you by using an online tool before you decide whether to click on the link.
9. You’ve got a few minutes to spare before yoga so you go to a café for a coffee. While drinking your coffee you connect to their free WiFi and go onto Facebook, but you end up at www.facebook.yhm.com. What do you do?
a. You assume it’s one of those load balancers or whatever they’re called. You know you entered the right name, so you log in as usual.
b. You explain to the staff that there’s something wrong with their DNS thingy and ask them to restart their access point.
c. You disconnect from the network and let the staff know what happened.
10. In cyber security, what’s it called when you manipulate someone else to get them to do something they know is wrong?
a. Social engineering.
b. Soft attacks.
c. Friendly fire.
Here at Combitech we’ve taken on the challenge of sharing knowledge about #CyberSecurity and raising security awareness in Sweden.
For example, one in three companies or organizations in Sweden has not ensured that everyone is aware of their security policy. We want to change this.
Correct answers to the questions: 1:c 2:c 3:b 4:a 5:c 6:a 7:c 8:c 9:c 10:a