
Companies Need to Prove They Can Be Trusted with Technology – HBR.org Daily


Technological solutionism — the idea that tech developers can be trusted to innovate or code our way out of problems, and into prosperity — is on its way out. Individuals’ trust in tech and the companies that develop it has been eroded by innumerable failures. For businesses, it’s no longer acceptable to use technology without taking steps to ensure it’s trustworthy.

Regular people — and, increasingly, legislators and regulators — have reasonably grown skeptical after recent transgressions against individuals’ privacy, the calcification of corporate and individual biases into life-altering algorithms, constant threats that new tech will erode their ability to make a living, and the beta testing of unsecured or faulty connected devices and vehicles on an unsuspecting public. And in the near future, with generative AI and virtual worlds being deployed faster than we can track them, the trust problem will only get worse.

More tech isn’t going to solve this problem. Customers are looking at the choices that company leaders make and assessing whether those choices match consumers’ and citizens’ values — and whether they put people first. And because every company uses some form of digital technology in its operations and relationships with customers and partners, every company needs to worry about digital trust. “Digital trust programs are a necessity, and when done right, can be both a differentiator and enabler for a business. As demands rise for companies to show they are trustworthy, organizations should strive to build a digital trust strategy that meets or exceeds their customers’ expectations, not just what’s legally required,” according to Vikram Rao, Salesforce’s Chief Trust Officer and a member of the World Economic Forum Digital Trust Steering Committee.

Some companies may look at the problem and hope they can buy or hire their way out of it. In some cases, a new role, like a Chief Trust Officer or Chief Digital Trust Officer, may help. But before CEOs and boards rush into creating a C-suite role as a panacea, they need to take a hard look at what exactly trustworthy technology looks like in the 21st century, how companies earn digital trust, and what changes and investments they need to make to do that.

What makes technology trustworthy?

No technology is inherently trustworthy or untrustworthy. After all, technologies have no agency in themselves; instead, as a character in the Afrofuturist film Neptune Frost puts it, “technology is only a reflection of us.” Trust in technology is a reflection of the decisions people make when they develop the tech, use it, and implement it. While there are many ways to think about trust, when the World Economic Forum convened business, government, and civil society leaders in the field of trustworthy tech, the assembled group reached this consensus: “Digital trust is individuals’ expectation that digital technologies and services — and the organizations providing them — will protect all stakeholders’ interests and uphold societal expectations and values.”


In this work, the Forum’s Digital Trust Community realized that the most important question to ask with regard to technology is not “how do you get people to trust technology?” Rather it’s “what do we need to do — as technology developers, owners, and users — to respect people’s values and expectations?” With the right mindset and the right goals, it’s possible to develop an effective and trustworthy strategy for the use of digital technology. At its heart, such a trustworthy tech strategy comes from consideration for the individuals subjected to new technologies (regardless of the technology in question).

With respect for individual rights and expectations in mind, a trustworthy digital technology strategy should identify the right goals. The World Economic Forum’s digital trust community offered three broad sets of goals:

  • Security and Reliability
  • Accountability and Oversight
  • Inclusive, Ethical, and Responsible Use

As Julie Brill, Microsoft’s Chief Privacy Officer and a member of the Forum’s Digital Trust Steering Committee said at the time, “the goals of any developer of AI or other technologies should mirror the goals of the organizations and individuals who use or encounter that technology. Shared goals of inclusivity and responsible use, strong privacy and security protections, and effective oversight are the foundational elements to building a trustworthy relationship between developers, their customers, and individuals who use the technology.” Users and consumers can better rely on technologies that protect them, that include their needs and match their values, and that have good governance practices baked in, to remediate any inadvertent harms.

How do you invest in digital trust?

Earning digital trust isn’t simple, and it can’t be automated. Rather it requires a series of judgment calls, investments, and organizational changes that reflect the totality of a technology’s likely impact on individuals. These shifts, encompassing the whole business, can only be orchestrated by company leaders. As Keith Enright, Chief Privacy Officer at Google and Digital Trust Steering Committee member, said, “decisions companies are making now about preparedness determine where they will be down the line in terms of trust.”

Becoming prepared to start earning back trust requires CEOs and Boards of Directors to do at least three things:

  • Define a vision for digital trust
  • Plan to act in more trustworthy ways
  • Recruit people who will help earn trust

The first investment leaders need to make is to take the time and energy to truly understand the impact of the technologies they create or deploy. Developing a realistic and comprehensive vision for how an organization uses technology in service of its goals and those of the people who rely on it determines how an organization will invest in digital trust.

Leaders should approach decisions involving technology with a clear sense of how those technologies reflect and impact core organizational values and the values of the society in which the business operates. Their vision for creating or adopting new technologies must weigh the benefits to the business and to society alongside a clear and responsible assessment of the potential harms new technologies might impose on potential customers or other stakeholders.

For example, a business adopting generative AI must assess how the technology might increase efficiencies (a benefit to the business) and unlock new solutions to pressing challenges (a benefit to society), while also developing a plan to prevent expected harms, such as harms to democratic processes from AI “hallucinations” and misinformation, or to workers displaced by these advanced technologies. Only a vision that responsibly considers both the benefits and the risks of new technology can truly be considered trustworthy.

After deciding on a vision that includes trustworthy goals, organizations need a plan to act for digital trust. This plan requires leaders to invest in the internal structures and teams that best support digital trust. For all applications of digital technology, the Forum’s work identified eight dimensions where positive action can help achieve an organization’s goals — both trust goals and overall financial or strategic goals. Those dimensions are cybersecurity, safety, transparency, interoperability, auditability, redressability, fairness, and privacy. (As with the Digital Trust Goals, these dimensions are defined and explored in the Forum’s Insight Report, Earning Digital Trust: Decision-making for Trustworthy Technologies.)
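
For organizations that want to make these eight dimensions concrete, even a simple checklist can help track where each system stands and which dimensions still lack an owner. The sketch below is a hypothetical illustration in Python: the dimension names come from the Forum’s framework, but the class, field names, and example entries are assumptions made here for illustration, not a tool from the Insight Report.

```python
# Hypothetical checklist for the eight digital-trust dimensions named above.
# The dimension names follow the Forum's framework; everything else here
# (class, fields, statuses) is an illustrative assumption, not a published tool.
from dataclasses import dataclass, field

DIMENSIONS = [
    "cybersecurity", "safety", "transparency", "interoperability",
    "auditability", "redressability", "fairness", "privacy",
]

@dataclass
class TrustAssessment:
    system: str                                   # e.g., "customer-support chatbot"
    owner: str                                    # accountable executive or team
    posture: dict = field(default_factory=dict)   # dimension -> current status

    def record(self, dimension: str, status: str) -> None:
        """Record the current posture for one dimension."""
        if dimension not in DIMENSIONS:
            raise ValueError(f"unknown dimension: {dimension}")
        self.posture[dimension] = status

    def gaps(self) -> list:
        """Dimensions with no recorded posture -- candidates for cross-team review."""
        return [d for d in DIMENSIONS if d not in self.posture]


# Example usage (names are illustrative):
assessment = TrustAssessment(system="generative-AI support assistant",
                             owner="Chief Trust Officer")
assessment.record("cybersecurity", "penetration test scheduled for Q3")
assessment.record("privacy", "data-minimization review complete")
print(assessment.gaps())  # the six dimensions still needing an owner and a plan
```

A structure like this is only a starting point; the judgment calls about trade-offs between dimensions still belong to the leaders described below.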

In most organizations, these areas likely fall to different departments and leaders, which means optimizing them requires an interdisciplinary approach — and possibly some structural change — in order to support the careful decision-making required to earn trust from users and other stakeholders. However, the realities of a given industry or company may mean it’s not possible to maximize every one of the dimensions described. For example, greater transparency may not be possible for companies that work in highly sensitive industries (e.g., national security), so stronger accountability and security practices would have to compensate. Similarly, some dimensions of trustworthiness (perhaps most noticeably privacy) vary between jurisdictions and geographies. People’s expectations and values are the litmus test for what makes technology trustworthy, and trust itself may be highly region- and industry-specific.


The third big initial investment is in the people who can help the business earn digital trust. Making technology trustworthy is a multi-faceted effort conducted on an organizational (or societal) scale — which means that ultimate responsibility for trustworthiness falls to the CEO. This general duty of trustworthy technology decision-making doesn’t mean, however, that the business shouldn’t invest in recruiting experts (maybe even in the C-suite) who can help establish the right strategy, and the right structures, to support digital trust.

This leader, possibly a Chief Trust Officer — but just as likely to be the Chief Privacy Officer, General Counsel, CISO, or another executive — must be both an expert in stakeholder management and a trusted counselor to the CEO and the board. Stakeholder management comes into play where companies already have cybersecurity, privacy, compliance, and even ethics functions. The leads for each of those functions have a stake in the overall trustworthiness of the company’s digital strategy — in both building it and benefiting from it. To be a counselor on digital trust, this individual must also be able to think systemically — bridging between these disciplines to help leaders make the best judgment calls on how, where, and toward what ends new technologies should be applied.

To be most effective, an officer responsible for digital trust can’t be an expert on every technology and every facet of trust; rather, they have to understand the company’s strategy and the values of affected individuals, workers, and customers. They support the development of a trustworthy vision by conveying to CEOs and boards a system-wide picture of the technology landscape and by helping build a strategy that supports trust rather than erodes it. They also help oversee the myriad and interdependent actions that, taken together, earn trust from users, partners, and other stakeholders.

CEOs must understand that trust in the technology that underlies and enables virtually every business can’t be bought. But by investing the time it takes to make the right decisions, establish a trustworthy vision, and recruit effective supporters, it can be earned.


