Disruption has become a fearful term, used more and more often when discussing the transformative force of generative artificial intelligence (GenAI) in Professional Service Firms (PSFs), including consultancy firms. Why fearful? Natural language processing (NLP) technologies – a type of generative AI – are rapidly growing in popularity. They have the astonishing ability to generate novel content (images, text, etc.), answer complicated questions, and propose solutions at a speed the human mind cannot match. GenAI systems rely on models trained on extensive datasets to comprehend the patterns and relationships found in the data. This also means a tool can only produce solutions and answers that resemble its training data. In response to this fearful disruption, consultancy firms are increasingly adopting this state-of-the-art technology to assist them in their daily work. Certainly, this will have an impact on the status quo of consultancy firms and their corporate hierarchy. But is this new technology so advanced that it will replace trust in human expertise completely? Today, public and in-house language tools are progressively used in consultancy, but not all ethical and practical implications have yet been overcome. To analyze this in more detail, a senior consultant working in the Netherlands was asked to share her insights and point out what exactly is fearful about the two types of GenAI.
Public/Open Language Tools
Changing the status quo, is there a transformative paradigm?
The use of public, openly accessible language models such as ChatGPT, Bard, Quillbot or Agros Multilingual is growing for consultants’ daily tasks, such as paraphrasing, enhancing sentence structure, translating, and writing introductions. Indeed, these technologies greatly help consultants do the same amount of work in a shorter amount of time. Fewer and fewer people are needed to complete the tasks: the status quo of consulting is bound to change. The interview with a senior consultant confirmed this transformative paradigm: “Consultants will likely lose their jobs; as a result, layers of the corporate hierarchy will get thinner, but they won’t disappear.” Your level in the corporate hierarchy represents your responsibility, based on your expertise and built-up trust, within the firm and with your clients.
Consultants will likely lose their jobs; as a result, layers of the corporate hierarchy will get thinner, but they won’t disappear.
The operational gap
Clearly, an operational gap can be identified between the rather simple and efficient tasks that GenAI is allowed to perform, and the tasks for which the human eye remains essential. Ethical considerations are the reason this gap still exists. The use of GenAI remains an ethical uncertainty, as its trustworthiness cannot be determined or guaranteed. For this reason, public GenAI operates with a confidentiality risk. At the interviewed expert’s firm: “Open language tools, such as ChatGPT and Google Translate, are not allowed to be used, because they can give away too much client information; even sharing the smallest bits is not allowed.”
Open language tools, such as ChatGPT and Google Translate, are not allowed to be used, because they can give away too much client information; even sharing the smallest bits is not allowed.
By entering data, you consent to it becoming available training data, on which other answers and possible solutions generated by that tool will be based. In addition, there is opacity: the lack of transparency AI algorithms provide to their users about how they generate and reason. This directly contradicts a core responsibility of the consultant: being able to explain your sources and information at all stages of the consulting research. A related hazard is the mixture of factual and non-factual data, which makes the credibility of information generated through language tools questionable. Evidently, open GenAI remains quite limited in its presumed ability to take over all tasks of a consultant. The human eye is required as the controlling factor, ensuring the trustworthiness and accountability of the firm towards its clients.
Private/in-house Language Tools
In-house language tools
“Bullshit in is bullshit out…”, as the interviewed expert put it. She reflected on the tendency of PSFs to make large investments in in-house GenAI tools. For instance, PwC has invested 1 billion dollars in its own (pilot) ChatGPT, and EY 1.4 billion dollars in an AI platform, with announced future spending on its language model tools. The aim is manifestly to create a more contained, secured, transparent, and controllable GenAI tool that can solve most security and confidentiality issues while generating solutions based on valid data. As the interviewed expert confirmed, this disruption would hypothetically have a great impact on the transformative paradigm, as it could close the operational gap. Notably, the ability to have this impact will be limited to the individual large (international) PSFs that have abundant datasets and the funds to create, manage, and continuously improve these language models. But again, not without (ethical) risks.
A limited investment
A language model is only as big, or as limited, as its dataset. This means that the quality of in-house generated solutions will be based on, and trained on, the company’s data only. Hence the idea of bullshit in is bullshit out. In addition, external factors and outside awareness are excluded to keep the data contained, limiting the ability to research or find undiscovered information, a quality for which a public language tool like ChatGPT is explicitly valued. Most restrictively, it will depend on client confidentiality agreements whether their data can be shared (anonymously) with the in-house tool. “With current strict security and confidentiality agreements, sometimes team members are not allowed to view all data available”, let alone all employees of a large (international) consultancy firm. Despite the promising efforts of large PSFs, overcoming the predominant confidentiality problem with a trustworthy in-house GenAI tool is not yet guaranteed.
With current strict security and confidentiality agreements, sometimes team members are not allowed to view all data available
In conclusion, the application of generative artificial intelligence (GenAI) in consulting raises both the prospect of a paradigm shift and ethical issues. Public language tools improve productivity and effectiveness but pose concerns about confidentiality and transparency. Large firms are investing heavily in in-house technologies to close the operational gap and offer solutions that are transparently and securely regulated. The phrase “bullshit in, bullshit out” perfectly captures the problem of relying only on internally confined data. In-house GenAI technologies have a lot of work to do before they can be fully trusted and held accountable, despite their potential. To maintain its credibility while navigating the changing context of human-machine collaboration, the consulting industry must still strike a balance between productivity and ethical considerations to ensure its trustworthiness.
Acknowledgement Statement: This blog is part of the student competition in Management Consulting Master Program at the School of Business and Economics.