Exclusive: SailPoint CEO on why bot identities need to be scrutinised
Companies are increasingly using bots to automate menial and repetitive tasks, freeing up employees to focus on more business-critical job roles.
To do this, bots access data, make decisions based on that data and then perform actions – but this access often isn’t scrutinised the way it would be when staff are granted access.
SecurityBrief spoke to SailPoint CEO Mark McClain about bot identity governance and what companies need to be aware of.
What are some of the most common enterprise use cases for bots at the moment, and what permissions do they gain in these use cases?
Bots are enjoying a wave of popularity within today’s enterprises.
Our customers are using bots to automate repetitive manual business and IT processes, allowing IT teams to offload monotonous, time-consuming tasks and focus on other priorities.
Some of the current use cases we’re seeing are things like virtual assistants, customer service chatbots, order fulfilment, and travel booking for employees, meaning these bots have permissions that give them access to a variety of sensitive customer, employee and enterprise data.
Why are these use cases being outsourced to bots?
These use cases are being outsourced to bots to save both time and money for enterprises that are working to drive business efficiencies and to keep pace with digital transformation.
For example, an enterprise can outsource order fulfilment to a bot that can work faster than a human, with potentially less human error.
Enterprises that have tens of thousands of customers can save a lot of time and money by automating these menial tasks.
And that’s not a bad thing as long as these bots and their access are being appropriately governed and secured.
What are the disadvantages of using bots in these use cases?
Bringing bots into the enterprise introduces a lot of opportunities, but it also creates new areas of exposure.
These bots have access to mission-critical systems, applications and data, just like any other user within the organisation.
Because of this, governing their access in the same way an enterprise governs human users is imperative to remain secure.
From a security standpoint, a hacker could easily spoof a bot and, as a result, gain access to all of the applications and data that the bot can reach.
Even worse, if left unmonitored, hackers could also start requesting access to other critical applications within the organisation, creating even greater exposure.
What can organisations do to mitigate the risks bots present?
Bots are acting just like their human counterparts in today’s enterprises, and they must be treated as such from a security perspective.
Enterprises need to tightly govern the access that bots have to their systems, applications and data.
Often, this will mean treating bots in the same way as contractor-based identities with policies that grant access to only the applications and data they need.
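The contractor-style, least-privilege model described above can be sketched in code. This is an illustrative sketch only, not any vendor’s API; the `BotIdentity` class, the `owner` field and the entitlement names are assumptions made for the example.

```python
# Hypothetical model of a bot as a governed identity, treated like a
# contractor account: deny by default, grant only what the bot needs,
# and record a human owner who is accountable for its access.
from dataclasses import dataclass, field


@dataclass
class BotIdentity:
    name: str
    owner: str                      # human accountable for this bot
    entitlements: set = field(default_factory=set)

    def can_access(self, resource: str) -> bool:
        # Deny by default: the bot reaches only what it was granted.
        return resource in self.entitlements


# An order-fulfilment bot is granted only order-related entitlements.
order_bot = BotIdentity(
    name="order-fulfilment-bot",
    owner="ops-team",
    entitlements={"orders:read", "orders:update"},
)

print(order_bot.can_access("orders:read"))   # within its grant
print(order_bot.can_access("hr:payroll"))    # outside its grant
```

The point of the deny-by-default check is that a spoofed or compromised bot exposes only the narrow set of resources it was explicitly granted, not everything in the enterprise.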
It’s also important to realise that bots have a lifecycle in the enterprise, just like an employee or a contractor whose role may change or evolve as they move around the company (or depart).
As such, a bot’s access needs to be regularly reviewed and updated, and the bot ultimately decommissioned if it is no longer serving its purpose.
This is the only way enterprises can continue to answer the important questions of who has access to what, who should have access, and what they’re doing with that access.
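The lifecycle described above – provision, periodic access review, decommission – can be sketched as a simple state model. The states, method names and entitlements here are assumptions for illustration, not a real identity-governance product’s interface.

```python
# Illustrative sketch of a bot identity's lifecycle: provisioned with
# entitlements, periodically recertified so unneeded access is revoked,
# and finally decommissioned with all access removed.
from enum import Enum


class LifecycleState(Enum):
    ACTIVE = "active"
    DECOMMISSIONED = "decommissioned"


class BotAccount:
    def __init__(self, name: str, entitlements: set):
        self.name = name
        self.entitlements = set(entitlements)
        self.state = LifecycleState.ACTIVE

    def review(self, still_needed: set) -> None:
        # Access recertification: keep only entitlements the owner
        # confirms the bot still needs for its current role.
        self.entitlements &= set(still_needed)

    def decommission(self) -> None:
        # Bot no longer serves a purpose: revoke everything.
        self.entitlements.clear()
        self.state = LifecycleState.DECOMMISSIONED


bot = BotAccount("travel-booking-bot", {"travel:book", "hr:read"})
bot.review(still_needed={"travel:book"})   # hr:read is revoked here
bot.decommission()                         # no residual access remains
```

Modelling the bot this way keeps the governance questions answerable at every stage: what it has access to, whether it should, and when that access ends.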
Should bots be governed by the same access and identity protocols as human employees?
Absolutely. A single identity platform should govern all identities – both human and non-human.
As the enterprise IT landscape continues to change, identity governance must also keep pace with these changes, particularly when it comes to expanding the definition of identities beyond employees, contractors and partners.
With a comprehensive approach to identity governance, enterprises can govern all users (both human and non-human), all applications and all data.
This is the secure path forward for enterprises today.