
“We don’t all need to know exactly how AI works, but we need to know that we control it”


John Chris Inglis still keeps the slide rule he got more than 50 years ago. Sometimes in lectures he presents it as his “first computer,” with a touch of wistfulness, as if he holds onto it out of misplaced sentimentality and is not ready to give up the past.

But Inglis, who was the first national cyber director in the White House, is not really sentimental. He is simply trying to convey a message: everything passes quickly, and obsolescence is relative. “The technological gap created between the slide rule and today will happen again,” he says in an interview with Calcalist. “It will happen in two years, then in a year, and then in six months. We can deal with it; we just have to always make sure we remember what stands at the center: the person.”



John Chris Inglis.

(Photo: Yuval Chen)

“The person at the center” may sound like a well-worn slogan, but in practice the opposite is true: today, technology is at the center and usually leads the agenda, seen either as the solution to all problems or as the mother of all problems.

Inglis, 68, suggests rethinking this and drafting a “new social contract” between those who develop the technology and us, the consumers, residents, and citizens. But the word “new” is misleading because this contract is actually based on past expectations.

“I may not know what’s going on under the hood of my car or why the engine can get me to 100 miles an hour, but I’m in control of it. I’ve got a gas pedal, I’ve got a steering wheel, and I’ve got brakes. We have to make sure we have the same relationship with artificial intelligence. Not each and every one of us needs to know exactly how it works, but we need to know that we control it. We also need to make sure that we improve our critical thinking skills, so that we understand what technology offers us and what that means in the context of the world we live in. I think if we do these two things, we will be able to deal with artificial intelligence, with quantum computing, and with everything that comes after.”

Before being appointed by President Joe Biden as National Cyber Director, Inglis served as deputy director of the NSA, the organization where he spent the bulk of his career. His first days at the agency, which paralleled the first days of the digital revolution, shaped his entire perception of security and cyber, but above all of the limits of his work.

“As a computer scientist, I believed in the power of technology, that you can write any type of code or build any system with a high level of security. But at the end of the day, the reality is that these systems can never be perfectly secure, because it’s not just the code you write but also its interaction with foundations like the internet. Everything is rapidly becoming so complicated that it’s almost impossible to create a closed system that doesn’t depend on anything else. Even a car today has 50 million lines of computer code. It’s just so complicated, and there are always loopholes that can be exploited.”

This is not an acknowledgment of defeat, he emphasizes to me, but simply reality. “We have to recognize that the internet contains valuable things, not only for us: criminals have discovered an attractive space there, nation-states have discovered an attractive space there. No wonder – the price of entry is very low, and the value that can be accumulated is very high,” said Inglis, who visited Israel to take part in Team8’s CISO conference.

Around his retirement as National Cyber Director in February of this year, multiple reports in the American press claimed that professional friction with senior officials in overlapping positions contributed to his decision to leave; these reports were vigorously denied. He explains: “The intention was not to create another kind of authority that would try to take over and exert force, but to connect all the parts that already existed. To take the agencies, departments, and the vast federal bureaucracy, to mobilize all those parts so that they would be coherent and focused, and to bring about better relations between the government and the private sector.”

Inglis took the job after the U.S. Senate unanimously confirmed his appointment, and he immediately introduced a more modern approach to threats. Among other things, his office formulated a national strategy on the issue, which was published shortly after he left.

Today, he holds a number of positions in the private sector, but he still speaks of the document as his own. “We’ve stated that we have to reallocate responsibility in cyberspace so that we can create the kind of resilience that we’ve come to expect in other systems like transportation or medicine,” he says. “A car is built to be inherently safe, the kind of thing that can get you from point A to point B with a high level of certainty that you will reach the end of the trip safely. But if you drink, text, or cross the dividing line while you’re driving, everything goes wrong. The driver has to actually participate in maintaining that safety. The same is true in cyberspace: whoever believes that we can make a computer or a network safe and then stand with folded arms and just watch it defend itself is forgetting the lessons of so many other complex systems.”

Nevertheless, the comparison to the car seems a little old-fashioned, especially in a world where technological developments are deployed very quickly, and the regulators don’t seem quite able to keep up.

“I agree that cyberspace is kind of complicated, but I think we can learn lessons about how we bend technology to our ends. A car might not be a perfect example, but it’s a good one, because when we talk about car safety we often focus on the device: the car, the airbag, or the seat belt. But car safety is really about the entire system within which the car operates. Large manufacturers are responsible for investing in the engineering that makes it inherently safe, but the story doesn’t end there. Someone needs to understand what roads should look like and how to design them to be as safe as possible. There are no jagged edges sticking out on our roads because we thought about what roads should do; the lanes are a standard width, and the bridges are a standard height.

“There is also management to go along with this system: there are police officers who patrol the highways to make sure that people are not speeding or drinking and driving. The last part is the person who operates the car, who needs to know what their role is. All of this very complicated arrangement of technology, roles, responsibilities, and people has brought us to a place where we expect the transportation system to work for us. We haven’t done that for cyber systems.”

Inglis offers an interesting perspective. Although he served for many years in an intelligence agency that tends to see threats everywhere, he repeatedly stresses the importance of education and of free, critical thinking. In between, of course, he pulls out detailed descriptions of threats from Russia, the spread of fake news, the campaign in Ukraine, but he is immediately drawn back to the ideas of digital literacy.

“We published a national cybersecurity strategy, but there is also an accompanying document that will be published soon: a national cyber education strategy. That is, how do we actually address the skills and educational needs of our citizens? Can they make optimal use of the internet? This is not only about people who work in IT; it’s about making sure that every citizen knows enough about cyberspace to cross the digital street safely. After all, we teach our children to cross busy streets and to be careful with hot stoves, and we also have to teach them how to use modern technology in ways that may not be obvious.

“Most people today are not ‘digital kids,’ meaning they don’t have that kind of detailed, intuitive understanding of how technology works. They are app kids; they know how to download an app and how to make it work, but they don’t know why they should worry about their security and safety: Who protects the data they store in the cloud? What protects the things they upload to a social network? What happens when they click on a link and someone else’s code runs with their permissions? We need to make sure they understand this, so that they not only think critically but also become skilled users of the internet and everything that comes with it.”

But for now, there is no regulation. Do you think we will see it coming from the U.S.?

“Yes. I think we’ve already seen some of that in the National Cybersecurity Strategy. One of the main things in it is reassigning responsibility to organizations. So if you’re a large software vendor or a cloud provider, there will be an expectation that you make reasonable investments in security, just as we hold car manufacturers accountable. It shifts responsibility away from users and operators, on whom most of the focus is directed today, and toward those who build the technology. In addition, we want to encourage resilience from the product design stage. Market forces will take care of some of this, but on their own they almost never provide full security in cases that are critical to life or safety, so governments should step in. But we need to make sure that we let market forces work first, that we use the lightest touch possible, and that we act with a high level of consultation with the industries.”

What about companies like NSO?

“NSO is an interesting case. From what I have seen, most countries that express opposition and concern are troubled not by the mere existence of the technology but by the way it is used. Imagine another world in which the same capability was developed but used to look for weak points in systems so that they could be fixed. We just need to ensure that these technologies are used for good purposes.”

But it’s not just about the actor. You wouldn’t want everyone to build atomic weapons, even though the same technology can also produce clean energy.

“It would not be completely unreasonable to say that I am going to ban something if, for example, I know the type of tool, am concerned about how it has been used in the past, and do not know how to control it in the present.

“Controlling the free flow of this technology to other places is equivalent to the EU’s ban on the import of NSO products. But the concern, at the end of the day, is how the technology is used, not its existence, because that technology can be, and often is, used for good purposes, right? There are those who try to break into systems every day, but as ‘white hat’ hackers, meaning they do it for the benefit of the defenders. We don’t deny them that. We don’t say it’s illegal and shouldn’t be done. So I think the context is important, and we need to make sure that our value system is manifested. Controlling the technology alone would miss the point that people use it in different ways.”

Inglis’ criticism, however, has its limits. These days mark a decade since the revelations of Edward Snowden, the NSA contractor who leaked to the media a huge trove of documents revealing how the American government operated a surveillance program that systematically collected information on millions of people around the world, American citizens and foreigners alike. When I ask Inglis whether the years that have passed allow lessons to be drawn about the limits of power, it is evident from his answer that, for him, time has stood still.

“We have to distinguish between his allegations and his revelations. Many of the things he claimed did not happen. Some of the things he claimed did happen were fully and fairly investigated, and we have redoubled the controls necessary to ensure that the state uses its powers in accordance with the law. The program he was most concerned about, the collection of telephone metadata, continued for several more years, but under full legal supervision. What the committee that investigated the affair ultimately determined is that the NSA did not abuse its authority and did not abuse this data.”

Inglis adds: “I would say that what I learned from him, personally, is that if you do something that can be disclosed or explained, you have to tell the stakeholders first. Those are the controls that exist to make sure it is done for one purpose and not another, so that there can be a free and fair discussion, as opposed to the kind of unholy urgency that turns it into the darkest story possible. There has been quite a lot of confusion and chaos in the years since the Snowden revelations, again due to the combination of accusations and revelations. However, I think what we have learned from this is that a free and open society depends on citizens understanding what is being done in their name, by whom, and how.”


