
Cyber security: why AI will never outsmart a human


Katerina Tasiopoulou has already had a remarkable career within a relatively short space of time. Awarded a BCS Young Professional Award in 2018 during her time as an Incident Response Engineer for IBM, Katerina’s determination, knowledge and skill led to her becoming CEO of her own successful cybersecurity business. Here Katerina explains the challenges of cybersecurity in the world of AI, her career journey and her advice for the next generation of cybersecurity professionals.

Can you tell us about your background and career to date?

I actually wanted to be an aerospace engineer, but this all changed when I learned some basic computer coding at school. One day in school assembly we were shown some computer code on the screen, and I put my hand up because I knew it was wrong. There was an error in the logical flow of the code, and when I pointed it out the teacher was shocked. I became very interested in coding from that point. I went on to study computer science and then cybersecurity at university before working in various positions for several companies, including IBM.

After this I founded my own company, Exelasis. My current work is focused on advanced penetration testing and ethical hacking, or ‘red teaming’. I initially worked on the incident response and intelligence side, or ‘blue teaming’, because being able to defend against real threat actors taught me a lot of skills that I could use on the red teaming side. It’s a very interesting, exciting and rewarding career.

Did you find it daunting to enter a career in which women are sadly still in the minority?

It has been very challenging at times. At university there were no other women in my class, and at the time it was difficult for me to understand why. In the working world, things are beginning to get better, but they still haven't changed dramatically. I have to admit that, as a woman working in cyber, I have sometimes not been appreciated in a room; I haven't always felt listened to or taken seriously.

But it's important to make the point that I don't want to be appreciated because I'm a woman; I want to be appreciated because I am educated, skilled, a team player, a collaborator, because I have input and because I'm a cybersecurity professional. Equality shouldn't be granted because of gender; it should exist regardless of gender. There is still a lot of work to do, and it's fantastic that organisations like the BCS are helping to change attitudes.

You won a BCS Young Professional Award in 2018. What did this mean to you?

The Young Professional Award has been absolutely invaluable in helping me with recognition, eminence, confidence and networking. It really isn't about the award itself as a physical thing; it's about how you use it. Even today, as the CEO of a company, I still mention the award in my presentations to show that I was recognised for my efforts, and to show others the progression that is possible with determination and self-belief. I imagine I'll still be referring to it 10 years from now. It has also really helped me build my network. I always view the award as the first step on the road to where I am today, because as I left the stage that night I set myself a goal: to run my own business.

In your role, what are the main challenges resulting from the evolution of cyber threats?

Technology is evolving incredibly quickly, but cybersecurity is not evolving at the same rate. The biggest challenge, from my perspective, is not the fact that there are new threats, although that is of course a major concern; it's that we haven't even fully addressed the old threats yet. Some organisations, banks for example, are built on very old foundations and principles, with various systems and layers added on top over time. Add AI into the mix and you suddenly have a whole new layer of complexity. Cybersecurity means defending your assets, but how do you do this as technology evolves and the threat parameters change? As the goalposts move it can be hard, as a business, to know whether you have reached your objectives in terms of cybersecurity compliance.

How do you begin to address such challenges?

The only way is to get the basics right, not just in terms of compliance but also in terms of technical assessment. It’s really important to take a step back. As a business, your objective is to protect your assets, but the only way to really be sure that you’ve done that is to try to hack your defences. So penetration testing or ethical hacking is the ultimate test. Starting with a basic penetration test allows you to identify current gaps and close the critical ones.

Only by carrying out these tests continuously can you really see your exposure. Cybersecurity has many different elements, and the picture changes daily. Secure coding or secure programming is not the same now as it was 10 years ago, and even the biggest organisations need to release new patches all the time because new issues are continually arising. There is so much change, but penetration testing is, in my opinion, the one solid approach that consistently evaluates defences from a technical perspective.

What about advancements in AI in relation to hacking? What are the implications here?

AI is going to be the new battlefield. At the moment it's people versus people. In cyber, adaptation and evolution always come from one side or the other. Attackers are using AI, so now we have to start using AI to continue to defend our assets. And we'll need to adapt and widen the parameters of attack and defence whenever the next advancement comes along, which will probably be quantum. AI is a relatively new tool and we're still learning. In my line of work it is bound to prove very useful for threat detection, incident response, and the correlation and analysis of data.

Breaches are based on knowledge and they are all related to data, and AI can help us understand data and look for patterns in hacker activity much more readily. On the flip side, the use of AI in cyber warfare poses many questions. It's hard to know where we're starting from, and it's difficult to understand who has the advantage here.

Could AI be a helpful tool for penetration testing and red teaming tasks?

AI is not just a button you press to carry out an activity; you're actually constantly feeding it information. If AI starts to be used for penetration testing, collecting threat data and learning advanced hacking techniques, then it's going to be a very dangerous weapon if it falls into the wrong hands. I think a moral code around what we teach AI, and the ethics around its use, is absolutely critical, because once we start teaching it we can't take that back.

If we train AI to become the most powerful hacking weapon in the world, then what happens? We need moral and ethical codes, but who is going to enforce compliance? Many businesses are still struggling to comply with GDPR correctly, and the intricacies of AI and how to govern its use create a whole different problem that certainly requires some serious thought in the very near future.

What are your thoughts on the idea of AI replacing human roles within cybersecurity?

I personally see AI as a tool to enhance, not replace. I read an article recently which suggested that AI could replace some of the more entry-level roles in IT and security, such as a SOC analyst. This is actually one of the crucial developmental roles that I always recommend for those looking to get into a career in security. If this role doesn't exist in the future, then what does that mean for those wishing to enter the industry? You have to start with the basics, because no one can become an expert overnight.




