The roots of the symbolic are key for business and management

What if symbols are rooted in evolution? In this article, Nelson Phillips and Christine Moser argue that looking at the symbolic origin of humanity can help us better understand management and organization on an intrinsic level—like understanding DNA is essential to modern-day science.

Authors

Nelson Phillips
University of California Santa Barbara
Jamie Hu
University of California Santa Barbara
Christine Moser
Vrije Universiteit Amsterdam

Download the full study

Phillips, N., & Moser, C. (2024). The biological basis of the symbolic: Exploring the implications of the co‐evolution of language, cognition and sociality for management studies. Journal of Management Studies. https://doi.org/10.1111/joms.13037

18 September 2024

It might seem strange to study the symbolic outside of fields like anthropology or linguistics. However, if we consider symbolism to be biological, it makes sense that a foundational part of what makes us human also contributes to how we bond and work. According to research from Christine Moser (SBE, Management & Organization) and her co-author Nelson Phillips (Professor, Technology Management at UCSB), looking at symbolism as a cornerstone of human evolution broadens our understanding of how we communicate, produce and store knowledge, solve problems, and more. Looking at the symbolic origin of humanity can help us better understand management and organization on an intrinsic level, much as understanding DNA is essential to modern-day science.

Human evolution begins with the symbolic

Phillips and Moser’s research starts with the idea that human evolution begins with the symbolic, not the other way around. Rather than treating the symbolic as a separate lens through which to view and analyze societal development, Phillips and Moser argue that the symbolic is a key part of what it is to be Homo sapiens. A symbol, by this research’s definition, is something that represents an object or idea outside of itself, doing so through intentionality, representation, and generality. The ability to create and interpret symbols, which evolved as part of the very nature of our species, gives us three capacities that separate us from animals and earlier human ancestors: language, sociality, and cognition. According to Phillips and Moser, these aspects of human behavior can all be linked back to how we manage and organize, making an evolutionary perspective key to understanding these processes on a foundational level.

The symbolic is a key part of what it is to be Homo sapiens.

The next portion of the research tackles, as mentioned earlier, how a symbolic perspective can affect the way that researchers approach management and organization. Phillips and Moser contend that, like animals that occupy ecological niches in the wild, humans build niches that include vocabulary and social know-how, as well as physical inventions such as computers and highways. Cognition, language, and sociality help us create and inhabit this “cognitive niche,” which allows us to plan for the future, remember the past, and work in large groups. Through this perspective, humans are biologically wired for social construction, in the same way that animals are wired to fit certain habitats. This matters for management and organization because it fills a gap in our prior understanding of the origin of the way humans manage and plan. Past research has focused on how managers use culture as an instrument or mechanism of control, rather than looking at it as the result of something innate to humankind. While managers can affect culture, this perspective provides more context for their activities and results.

Humans are biologically wired for social construction, in the same way that animals are wired to fit certain habitats.

Symbolic machines change our cognition

Finally, Phillips and Moser raise questions about how recent technological advances factor into this cultural conversation. If the symbolic is to be considered a fundamental part of human biology, then what does that say about symbolic machines and their growing role in human spaces? The fact that technology like AI is still developing, and can already produce written texts and imagery in a similar way to humans, highlights the impact such systems can have on the cognitive niche. An MIT Technology Review article predicts that as AI becomes more established in the workplace, it will push the human workforce toward more complex tasks. If symbolic machines can drive change to the cognitive niche in this way, humans may not even be aware of how that niche is changing. An evolutionary perspective would allow researchers to catch and track these changes with much more precision and forethought.

If the symbolic is to be considered a fundamental part of human biology, then what does that say about symbolic machines and their growing role in human spaces?

In conclusion, symbolic evolution shifts the very perspective from which we look at the construction of society. Concepts that seem limited to humanity can be drawn back to our singular ability to abstract ideas. If evolutionary theory, focused on symbolism as its keystone, can be accepted as a foundational part of management and organization, this could be used as a stepping stone for future research. 

What can we change about our approach to management, organization, and future research?

This paper asks whether the study of management and organization is lagging behind, considering that related fields, like psychology, have long since adapted to and incorporated evolutionary theory. With these questions in mind, managers and employees can better understand their behaviors as a product of innate biology, instead of viewing actions in isolation.

Here are some questions we can ask: 

For managers: How can we rebuild our management processes based on what we know about human evolution and our origins? How can we ensure that management is less reactionary in the short term and takes longer-term biological contexts into account?

For employees: What information can we provide to our managers that better informs their judgments based on contexts beyond the situation at hand? How can we be more adaptable and understanding of communication errors and mistakes that result from broader symbolic challenges or evolutionary barriers?

For new technology or tools: If symbolic machines—like AI—are starting to take over the creation of new symbols, previously something only humans could do, what does this mean for more antiquated processes? How will we create new methods of organization that cater and adapt to these developing tools? 

For broader research: What patterns can we draw from viewing human social behavior from an evolutionary perspective, and what seemingly isolated actions can we better understand? Are there old works that could benefit from a new look through this perspective? 

Ultimately, shifting perspectives beget change. It’s important to make sure, as we uncover new ways to consider or reference old concepts, that we’re able to apply these to the past and present, as well as remain open to how they could change our future. 

Authors

Nelson Phillips
University of California Santa Barbara

Nelson Phillips is the Christian A. Felipe Professor, Vice Chair for Academic Affairs, Ph.D. Faculty Advisor, and Distinguished Professor, Technology Management in the College of Engineering at the University of California Santa Barbara. Phillips’ research interests cut across organization theory, innovation, and technology, and he has published widely in the field of management and organizational studies. Find him on Google Scholar and LinkedIn.

Jamie Hu
University of California Santa Barbara


Christine Moser
Vrije Universiteit Amsterdam

Christine Moser is an Associate Professor of Organization Theory at the School of Business and Economics, Vrije Universiteit Amsterdam. Her research focuses on the role of technology and materiality in social interaction, with the aim of helping people organize (more) sustainably. Recent research projects include digital transformation in healthcare, algorithmic decision-making, and the sustainable sourcing of cocoa. Find her on Google Scholar and LinkedIn.

Bridging the Digital Gap: AI Consulting in Education

In a time of accelerating technological development, educational institutions fall behind in utilizing AI’s potential. Schools are at a crossroads: embrace the AI revolution or risk becoming obsolete as pupils seamlessly surf the digital wave. Are our schools equipped to take up the challenge?

Author

Bo de Jong
Vrije Universiteit Amsterdam
20 November 2023

In this day and age, the world is marked by rapid technological development. Consequently, educational institutions face an unprecedented dilemma: how to harness the power of artificial intelligence (AI) to stay ahead of the game. I had the opportunity to interview IT consultant Bauke van der Weijden, whose clients include secondary schools, to shed light on this matter. The conversation gave me new insights and a clear picture of the current state of AI in education, or, perhaps more accurately, the lack thereof, as well as of the factors that keep educational institutions from embracing and implementing game-changing innovation. This raises an important question: are schools prepared and willing to educate and mentor students in this modern age of artificial intelligence?

Catching the AI Wave

One of the most striking findings from the interview was the blatant ignorance about digitalisation within schools. Many educational institutions appear to be behind in implementing AI, even though this technology continues to transform sectors all around the globe. IT consultant Van der Weijden notes that “educational institutions are very reactive”. This reactivity shows in how these institutions often play catch-up, particularly in the area of AI. It produces a digitalisation knowledge gap, since schools find it difficult to keep up with their technically proficient students. That gap places schools at a noticeable disadvantage, while educational institutions should actually be ahead of the curve in digitalisation, educating students for a future in which AI will undoubtedly play a crucial part.

One of the most striking findings from the interview was the blatant ignorance about digitalisation within schools.

Tackling Educational Apathy

Besides this lack of awareness, there is also a reluctance within educational institutions that prevents the implementation of AI. IT consultant Bauke van der Weijden encounters this a lot in his daily work. School staff do not see the benefits of implementing AI, as they have other priorities. This lack of motivation is one of the biggest obstacles to integrating AI into educational institutions. Many teachers and other school staff appear uninterested in the notion of digitalisation, thereby also ignoring the opportunities that AI can bring them. For IT consultants wanting to introduce AI into the educational sector, this apathy among school staff presents a significant hurdle.

This lack of motivation is one of the biggest obstacles to integrating AI into educational institutions.

AI as a Classroom Cop

Many educational institutions have yet to fully grasp the potential of AI in today’s rapidly digitalising environment. Worries about academic integrity have been a major factor in how many institutions introduce AI into the classroom. As a result, the widespread opinion within education centres on employing AI as a safety precaution, a watchdog looking out for risks like plagiarism. Van der Weijden emphasizes that schools primarily consider AI from a security standpoint, utilizing technologies like ChatGPT to screen for potential violations of academic integrity. But is this the real potential of AI in education? Hardly.

Exploring AI’s Potential

The application of AI in education goes beyond mere plagiarism detection. According to Van der Weijden, “There are also increasingly better AI tools,” not just for detection but also for completely altering how subjects are taught and assessed. Schools need to be receptive to the opportunities that AI can bring. However, according to Van der Weijden, it is still unpredictable what exactly AI will bring in the future: “The applications are so versatile. All we know is that the ones who start using it first will be the frontrunners in what they do.” While students quickly adopt AI, schools frequently fall behind, reacting rather than inventing. Educational institutions require a paradigm shift rather than just trying to “keep up with what the market will do.”

From Digital Guard to Guide

The interview with Van der Weijden sheds light on the need for educational institutions to close the digital gap. AI shouldn’t be viewed solely as a detection tool. It should be seen as a collaborator with the potential to completely alter how we transmit and consume knowledge. The incorporation of AI into the curriculum is not only advantageous but necessary, given its capacity to offer individualized learning experiences, immediate feedback, and data-driven insights. Schools should begin educating staff members and students on the advantages and disadvantages of this disruptive technology, and on how to use it effectively to navigate and build a new curriculum.

AI shouldn’t be viewed solely as a detection tool. It should be seen as a collaborator with the potential to completely alter how we transmit and consume knowledge.

Consultant’s Call to Action

What is the real problem? Reactivity versus proactivity. Educational institutions can no longer afford to remain spectators in a world where AI is pervasive. Schools need to change from reactive institutions into proactive ones, and consultants should take on this proactive role as well. Before technologies like ChatGPT become commonplace among students, consultants should speak with schools about the possible uses and abuses of these platforms, as Van der Weijden accurately puts it. A proactive approach can bridge the gap between what students already know and what they need to know about AI. IT consultants need to do more than just share their technical insight and expertise to increase schools’ adoption of AI. They should take an active role in engaging schools in discussion about the revolutionary possibilities of AI, and provide practical learning approaches, like workshops and training. Such a hands-on approach will help educational institutions gain the knowledge and confidence to effectively incorporate AI into their curricula.

Educational institutions can no longer afford to remain spectators in a world where AI is pervasive.

Broadening Horizons

Thus, to fully harness the power of AI, schools should change their perspective. It’s time to embrace AI as an ally, a tool that can not only discover errors but also improve the educational environment, rather than viewing it only as a danger or a gatekeeper. It’s an appeal to lead, create, and set a standard for the future rather than merely keeping up with the times. From where we stand right now, the future of education with AI is not just promising; it is exhilarating.

Acknowledgement Statement

This blog is part of the student writing competition in the Management Consulting Master’s Program at the School of Business and Economics.

Author

Bo de Jong
Vrije Universiteit Amsterdam

Bo de Jong is a Master’s student in Business Administration with a specialisation in Management Consulting. She is interested in business processes, change management, and strategic issues along with innovation and digitalisation.

Is Generative AI ethically trustworthy to be used in consulting?

The popularity of Generative AI in professional service firms is growing. However, due to ethical considerations, language tools remain limited in taking over human responsibilities. Huge investments are being made in the development of in-house language tools, but are their capabilities more extensive than those of a public tool such as ChatGPT? This article provides a practical comparison.

Author

Maxime Majoie
Vrije Universiteit Amsterdam
13 November 2023

Disruption is a fearful term used more and more often when talking about the transformative force of generative artificial intelligence (GenAI) in Professional Service Firms (PSFs), including consultancy firms. Why fearful? The popularity of natural language processing (NLP) technologies, a type of generative AI, is growing rapidly. These tools have the astonishing ability to generate novel content (images, text, etc.), answer complicated questions, and offer solutions at a speed the human mind cannot match. GenAI systems use models trained on extensive datasets to comprehend the patterns and relationships found in the data; this also means a tool can only produce solutions and answers similar to its training data. As a result of this fearful disruption, consultancy firms are increasingly adopting this state-of-the-art technology to assist them in their daily work. Certainly, this will have an impact on the status quo of consultancy firms and their corporate hierarchy. However, is this new technology so advanced that it will replace trust in human expertise completely? Today, public and in-house language tools are progressively used in consulting, but not all ethical and practical implications have been overcome. To analyze this in more detail, a senior consultant working in the Netherlands was asked to share her insights and point out what exactly is to be feared when using the two types of GenAI.

Public/Open Language Tools

Changing the status quo, is there a transformative paradigm?

The use of public, openly available language models such as ChatGPT, Bard, Quillbot or Argos Multilingual is growing for consultants’ daily tasks, such as paraphrasing, enhancing sentence structure, translating and writing introductions. Indeed, these technologies greatly help consultants do the same amount of work in a shorter amount of time. Fewer and fewer people are needed to complete the tasks: the status quo of consulting is bound to change. The interview with a senior consultant confirmed this transformative paradigm: “Consultants will likely lose their jobs; as a result, layers of the corporate hierarchy will get thinner, but they won’t disappear.” Your level in the corporate hierarchy represents your responsibility, based on your expertise and built-up trust, within the firm and with your clients.

Consultants will likely lose their jobs; as a result, layers of the corporate hierarchy will get thinner, but they won’t disappear.

The operational gap

Clearly, an operational gap can be identified between the rather simple, efficient tasks that GenAI is allowed to perform and the tasks for which the human eye remains essential. Ethical considerations are the reason this gap still exists: the trustworthiness of GenAI cannot be determined or guaranteed, so public GenAI operates with a confidentiality risk. At the interviewed expert’s firm: “Open language tools, such as ChatGPT and Google Translate, are not allowed to be used, because they can give away too much client information; even sharing the smallest bits is not allowed.”

Open language tools, such as ChatGPT and Google Translate, are not allowed to be used, because they can give away too much client information; even sharing the smallest bits is not allowed.

By entering data into these tools, you consent to it becoming training data, on which other answers and possible solutions generated by that tool will be based. In addition, there is opacity: the lack of transparency AI algorithms provide about their generating operations and reasoning. This directly contradicts a core responsibility of the consultant: being able to explain your sources and information at all stages of consulting research. A related hazard is the mixture of factual and non-factual data, which makes the credibility of information generated through language tools questionable. Evidently, open GenAI is still quite limited and far from taking over all tasks of a consultant. The human eye remains the controlling factor that ensures the trustworthiness and accountability of the firm towards clients.

Private/in-house Language Tools

In-house language tools:

“Bullshit in is bullshit out,” as the interviewed expert put it. She reflected on the tendency of PSFs to make large investments in in-house GenAI tools. For instance, PwC has invested 1 billion dollars in its own (pilot) ChatGPT, and EY 1.4 billion dollars in an AI platform, with announced future spending on their language model tools. The aim is clearly to create a more contained, secure, transparent, and controllable GenAI tool that can solve most security and confidentiality issues while generating solutions on valid data. As the interviewed expert confirmed, this disruption would hypothetically have a great impact on the transformative paradigm, as it could close the operational gap. Notably, the ability to have this impact will be limited to the individual large (international) PSFs that have abundant data and the funds to create, manage and continuously improve these language models. But again, not without (ethical) risks.

A limited investment

Every language model is only as big, or as limited, as its dataset. This means that the quality of in-house generated solutions will be based on, and trained with, the company’s own data: hence the idea of bullshit in, bullshit out. Moreover, external factors and information are excluded to keep the data contained, limiting the ability to research or discover new information, a quality for which a public language tool like ChatGPT is explicitly valued. Most restrictively, it will depend on client confidentiality agreements whether client data can be shared (anonymously) with the in-house tool at all. “With current strict security and confidentiality agreements, sometimes team members are not allowed to view all data available,” let alone all employees of a large (international) consultancy firm. Despite the promising efforts by large PSFs, overcoming the predominant confidentiality problem with a trustworthy in-house GenAI tool is not yet guaranteed.

With current strict security and confidentiality agreements, sometimes team members are not allowed to view all data available

In conclusion, the application of Generative Artificial Intelligence (GenAI) in consulting raises both the prospect of a paradigm shift and ethical issues. Public language tools improve productivity and effectiveness but raise concerns about confidentiality and transparency. Large firms are investing heavily in in-house technologies to address the operational gap and offer solutions that are transparently and securely regulated. The phrase “bullshit in, bullshit out” perfectly captures the problem of relying only on internally confined data. In-house GenAI technologies have a lot of work to do before they can be fully trusted and held accountable, despite their potential. To maintain its credibility while navigating the changing context of human-machine collaboration, the consulting industry must still strike a balance between productivity and ethical considerations.

Acknowledgement Statement: This blog is part of the student writing competition in the Management Consulting Master’s Program at the School of Business and Economics.

Author

Maxime Majoie
Vrije Universiteit Amsterdam

Maxime Majoie is a Belgian student at the School of Business and Economics of the Vrije Universiteit of Amsterdam. She is currently pursuing a MSc BA with a specialisation in Management Consulting. She has a background in Humanities studying International Studies at Leiden University.

AI and Consulting: Should Algorithms Find Their Rhythm, or not?

Amidst rapid AI progress, consultancies face a dilemma: harnessing AI’s efficiency while paying attention to its environmental impact. Experts warn against over-reliance on AI, emphasizing its lack of human empathy and understanding, which are crucial in consultancy. Yet a sustainable future lies in balancing AI’s benefits with environmental responsibility. Consultants must retain their expertise, prioritizing analysis and client relationships, steering the industry towards an environment-friendly, AI-supported future.

Author

Arne van Faassen
Vrije Universiteit Amsterdam
6 November 2023

The world as we know it today is developing at a rapid pace, with technological advancements appearing like wildflowers and transforming our technological landscape. One of the most prominent recent developments has been Artificial Intelligence (AI), with tools like OpenAI’s chatbot ChatGPT gaining a bigger user base by the day, consultants not shying away from using it either. With AI able to perform lightning-fast idea formulation and data analytics on big data, it has become interesting for consultancies to explore developing their own AI tools.

However, as with every new and hot topic that seems to conquer and revolutionize our world with its benefits, AI also comes with certain complications. Firstly, a complication that is often not spoken about is the energy usage of AI: developing and maintaining AI systems comes not only with financial costs but also with environmental costs. Secondly, the human factor should not be overlooked, especially in the work of consultants, in which thoroughly considered solutions are formed through a deep and detailed understanding of the problem and situation at hand.

So, an important question arises when thinking about the future of consultancies in light of AI implementation: should consultancies explore using AI to take over their workload and take the environmental effects for granted? To answer this question, I interviewed Ton Metselaar, an experienced management consultant at a successful business and technology consulting firm. The conclusion is that we cannot let this rapid development run on its own without looking at the consequences. Every innovation should be assessed for its environmental impact, and ways to limit energy usage should be explored. Furthermore, the human factor in consultancy work is crucial, because of consultants’ expertise and relationship with the client, and this should stay at the core of the value they bring.

Exploring the benefits AI can offer

The speed and completeness of the answers AI bots currently provide is impressive, to say the least. The convenience of a little AI helper can spark ideas and save time on routine tasks. But the future lies further ahead, where consultancies will try to reap the benefits of using AI to analyse huge chunks of data within seconds, where normal data analysis would take significantly longer. These time savings translate into cost savings as well, of course, making the possibilities interesting to explore.

Unravelling the environmental threats of AI

Yet, while people picture a world where AI performs all their tasks and we no longer have to do any work ourselves, it is not without drawbacks. Like other technological advancements, AI models have a carbon footprint. The humongous amount of data stored on data center servers requires a lot of energy and water to run the servers, the equipment, and the cooling systems. Ton Metselaar explains that it is like calculating the transactions for Bitcoin, which is known to consume large amounts of energy. AI models produce CO2 emissions in three ways. The initial training of a model is the first and most energy-consuming: researchers have calculated the emissions of training one large model at roughly 626,000 pounds of CO2, comparable to the lifetime emissions of five average cars (Strubell et al., 2019). Furthermore, the carbon footprint of keeping generative models running and letting people interact with them is slightly below these emissions. The last way is through updating and tailoring the model to a consultancy’s specific information, which uses the least energy.

With the enormous growth of AI, the energy usage of these models will rise as well and have a threatening impact on our environment.

Analysing the importance of the human factor

Beyond these drawbacks, AI also contains flaws. As Ton Metselaar accurately puts it: “AI can make a lot of suggestions and have ideas on how to make things simpler, but as a model it can never be completely without mistakes.” AI might interpret certain factors wrongly or miss contextually important details, which can lead to wrong outcomes and bad solutions. It is therefore not a reliable source on its own. In addition, AI lacks human empathy and understanding, since it is unable to genuinely understand client concerns. Clients seek an emotional, human understanding that AI cannot offer; without it, clients can feel they are treated as data points rather than as individuals with unique requirements and specific concerns. AI cannot establish trust or build a reputation with clients through meaningful conversations, active listening, and the human touch that comes from genuine interaction, which are the cornerstones of client relationships.

Proposing a sustainable way of AI usage

But what should our future look like then?

The ideal future is that of a sustainable one. Not only sustainable for the environment but also sustainable for the careers of consultants.

Ton Metselaar goes on to explain that a good consultant must be an expert in their field. It is vital that they listen carefully to what a client needs and to the specific situation the client is in. This information should then be mindfully analysed to arrive at the optimal solution. Ultimately, the solution needs to be properly communicated to the client, with whom building and maintaining a relationship is indispensable. All these aspects show that AI cannot replace the human factors consultants bring to the table. Furthermore, with our world threatened by drastic environmental issues, “We should not only save the planet, but also save ourselves. The climate will likely kill us before it will kill the planet,” as Ton Metselaar warned in the interview. Looking at solutions in more detail, we should not aim for every consultancy to build its own specialised AI model, but rather build on existing ones to limit energy usage. Moreover, AI usage should be incorporated into the monitoring of a company’s emissions, to be aware of its magnitude. To truly seek responsible improvement, environmentally friendly AI developments should be pursued instead of creating completely personalised ones.

Reference list

Kumar, A., & Davenport, T. (2023, July 20). How to Make Generative AI Greener. Harvard Business Review. https://hbr.org/2023/07/how-to-make-generative-ai-greener

Strubell, E., Ganesh, A., & McCallum, A. (2019). Energy and Policy Considerations for Deep Learning in NLP. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 3645–3650. https://doi.org/10.18653/v1/P19-1355

Acknowledgement Statement

This blog is part of the student writing competition in the Management Consulting Master's Program at the School of Business and Economics.

Author

Arne van Faassen
Vrije Universiteit Amsterdam

Arne van Faassen is a 24-year-old Management Consulting Master's student at the Vrije Universiteit, currently based in Amsterdam. After obtaining his Bachelor's degree in Business Administration at the University of Amsterdam, he gained experience as a consultant working at Page Personnel in the field of IT Recruitment. His interests lie in entrepreneurship, strategy, and optimization to help organizations unlock their full potential.

Unlocking the Future of Consultancy: Building In-House AI Languages with a Deep Learning Expert

The transformative power of AI is reshaping our world. Consultancy firms are innovating by creating AI languages, a complex process involving data, model fine-tuning, and continuous improvement. As a result, major consultancy companies are likely to have integrated AI models within five years, enhancing employee performance and fostering data-driven decision-making.

Author

Pietro Rapetti
Vrije Universiteit Amsterdam
30 October 2023

AI is changing the world

AI is changing the world. Perhaps, experiencing these changes from the inside, as citizens of the game-changing superpowers, we don't really register the violent impact of disruption on our daily life. That impact will hit us when, while cooking dinner for our beloved family, we realise that an artificial intelligence linked to our children's watches, by monitoring pulse rate, body heat and blood oxygen, can recommend the best cartoon for them to watch or the best vegetable for them to eat. Only at that point will we really think, "OK, what's happening here?". But for now, seeing these changes in slow motion, reading about one "small" innovation at a time (simply because we can't read more than one word at a time), doesn't really seem to shake us. Long story short: the world (and the business world first of all) is changing, and it's changing fast.

When speaking about innovative practices, people can lag behind; businesses can't

We, as people, can afford to remain a few steps behind without risking bankruptcy. Businesses can't. This is especially true when the businesses in question are consultancy firms that, in order to survive, formulate strategies to make other companies stand out within a sea of competitors backed by yet other companies. In this mess, it is inevitable that consultancy firms must not only keep up with innovation, but even direct and guide it. On this last point, I'm fairly sure they will, for two main reasons: they are constantly immersed in a network of stakeholders (clients and partners) who are informed, or need to be informed, about cutting-edge technologies and trends; and their human-centred nature (with relatively low fixed assets) allows them to implement new methodologies and sell new products in a fast and nimble way. Now, when we speak of implementing new methodologies and selling new products, what are we referring to? In this article, we are referring to implementing and selling Artificial Intelligence languages.

It is inevitable that consultancy firms must not only keep up with innovation, but even direct and guide it.

How can new languages be created? A chat with Carlo

To understand how new languages are created, I interviewed my long-time friend Carlo, Master of Science in Machine Learning and Research Engineer at CISPA Helmholtz Center for Information Security. In simple terms, Carlo's daily job involves fixing flaws in, and improving the state of the art of, AI and deep learning models. I interviewed him to understand what it means for consultancy firms to create their own AI language, and which variables are involved. Speaking with Carlo, I learned that there are two main protagonists in this process: the data (the oil) and the model (the engine). While the parameters of the models can be obtained and used by anyone, since companies such as Meta make them public and usable (we are speaking of pre-trained, large, generalised language models), the data needed to make the model speak the "consultancy language" (through the "fine-tuning" process) is more difficult to find. For this reason, big firms such as PwC, EY and KPMG, which already hold tons of terabytes of in-house data, can more easily get the engine to work. Small firms, instead, are obliged to download data from public online sources such as Kaggle or the English Wikipedia, which still provide large quantities of good-quality data. This doesn't mean that small firms can't do it; it just means that for them it will probably be more costly. So once the consultancy firm in question has Meta's pre-trained AI model on the one hand and the data on the other, Carlo takes the field. His role is to use the collected data to make the model speak the strategic consulting, marketing consulting or HR consulting language (depending on the data collected), delivering a product that can increase the efficiency of work within these firms. And how does he do this?
Through fine-tuning: a process that consists of further training the pre-trained, large, generalised language model to make it better suited to a particular application.
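The pre-train-then-fine-tune idea can be illustrated with a deliberately tiny model. The sketch below uses a two-parameter linear model in plain Python (not Carlo's actual setup, where billions of transformer parameters are adjusted): a model is first "pre-trained" on a large general dataset, and its weights are then used as the starting point for a few more training steps on a small, domain-specific dataset.

```python
# Toy illustration of fine-tuning: a "pre-trained" linear model is
# adapted with a small domain dataset. Purely illustrative; real
# fine-tuning updates billions of parameters, not two numbers.

def train(w, b, data, lr=0.01, epochs=2000):
    """Plain gradient descent on mean squared error for y = w*x + b."""
    for _ in range(epochs):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
        grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Step 1: "pre-training" on a large, general dataset (relation y = 2x).
general_data = [(x, 2 * x) for x in range(10)]
w, b = train(0.0, 0.0, general_data)

# Step 2: "fine-tuning" on a small domain dataset with a slightly
# different relation (y = 2x + 1), starting from the learned weights
# instead of from scratch.
domain_data = [(x, 2 * x + 1) for x in range(5)]
w_ft, b_ft = train(w, b, domain_data)

print(round(w_ft * 3 + b_ft))  # prediction for x = 3 after fine-tuning
```

The point of the sketch is the workflow, not the arithmetic: the expensive general training happens once, and the cheap second phase only nudges the existing weights toward the domain data, which is exactly why fine-tuning an open pre-trained model is so much cheaper than training one from zero.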

While the parameters of the models can be obtained and used by anyone, since companies such as Meta make them public, the data needed to make the model speak the "consultancy language" is more difficult to find.

Deep diving into the apparently impossible process

This process involves the use of more than 20 GPUs (Graphics Processing Units: the billions of operations needed to train these models are performed on specialised computers built for fast, parallel processing). Once the model has been trained to perform a particular task, or "speak" a particular language, it can be prompted and used to generate new text or speech. What happens in this second step (called inference, or prediction) is often referred to as a black box, since the human mind can't really understand what is going on under the hood or why exactly the model generates one specific output (but that is another story). Once the fine-tuned model is created, it needs to be put into production, which is a complex process in itself. Through CI/CD (Continuous Integration and Continuous Delivery) pipelines, the model is updated and trained with data and questions, and every time a new output is generated, feedback on that output is fed back into the model, closing the loop of the pipeline. This helps ensure the model works effectively and doesn't give unwanted answers (such as racist answers or information on how to conduct illegal activities). Finally, after completing this phase, the model is ready to be used by the consultancy firm (while still requiring continuous training with new and updated data).

Through CI/CD (Continuous Integration and Continuous Delivery) pipelines, the model is updated and trained with data and questions.

The outputs of the apparently impossible process: the case of Lilli

This process, which seems long and expensive, is actually feasible thanks to the work of specialised engineers and consultants such as Carlo, who concludes the interview with his own prediction: "I think the major consultancy companies, in five years, will all have ad hoc fine-tuned AI models integrated into their systems. Their employees will use them to boost their performance, working in a more efficient and data-driven environment." And someone has already started: this year McKinsey launched "Lilli", its own generative AI solution that aggregates all of McKinsey's knowledge and capabilities in one place. The data used to train Lilli draws on more than 100,000 documents and interview transcripts that the firm collected over years of hard work. For Adi Pradhan, an associate partner who specialises in technology strategy and transformations, Lilli is "a thought-sparring partner" ahead of meetings and presentations. He uses Lilli to look for weaknesses in arguments and anticipate questions that may arise, to tutor himself on new topics, and to make connections between different areas of his projects.

Acknowledgement Statement

This blog is part of the student writing competition in Management Consulting Master Program at the School of Business and Economics.

Author

Pietro Rapetti
Vrije Universiteit Amsterdam

Pietro Rapetti is a master's student at the Vrije Universiteit School of Business and Economics. After completing a Bachelor's in Economics and Management for Art, Culture, and Communication at Bocconi University in Milan, he joined the Innovation Team of PwC Italy. Among other responsibilities, he was involved in the creation of startup incubators and accelerators.