Bridging the Digital Gap: AI Consulting in Education

In a time of accelerating technological development, educational institutions are falling behind in utilizing AI's potential. Schools are at a crossroads: embrace the AI revolution or risk becoming obsolete as pupils seamlessly surf the digital wave. Are our schools equipped to take up the challenge?

Author

Bo de Jong
Vrije Universiteit Amsterdam
20 November 2023

In this day and age, the world is marked by rapid technological development. Consequently, educational institutions face an unprecedented dilemma: how to harness the power of artificial intelligence (AI) to stay ahead of the game. I had the opportunity to interview IT consultant Bauke van der Weijden – whose clients include secondary schools – to shed light on this matter. The conversation gave me new insights and created a clear picture of the current state of AI in education – or, perhaps more accurately, the lack thereof – as well as the elements that keep educational institutions from embracing and implementing game-changing innovation. This raises an important question: are schools prepared and willing to educate and mentor students in this modern age of artificial intelligence?

Catching the AI Wave

One of the most striking findings from the interview was the blatant ignorance about digitalisation within schools. Many educational institutions appear to be behind in implementing AI, even though this technology continues to transform sectors all around the globe. IT consultant Van der Weijden notes that "Educational institutions are very reactive". This reactivity shows in how these institutions are often playing catch-up, particularly in the area of AI. It brings about a digitalisation knowledge gap, since schools find it difficult to keep up with their technically proficient students. The gap in understanding places schools at a noticeable disadvantage, while educational institutions should actually be ahead of the curve in digitalisation, preparing students for a future in which AI will undoubtedly be a crucial part.

One of the most striking findings from the interview was the blatant ignorance about digitalisation within schools.

Tackling Educational Apathy

Besides the lack of awareness, there is also a reluctance within educational institutions that prevents the implementation of AI. IT consultant Bauke van der Weijden encounters this frequently in his daily work. School staff do not see the benefits of implementing AI, as they have other priorities. This lack of motivation is one of the biggest obstacles to integrating AI into educational institutions. Many teachers and other school staff appear uninterested in the notion of digitalisation, thereby also ignoring the opportunities that AI can bring them. For IT consultants wanting to introduce AI into the educational sector, this apathy among school staff presents a significant hurdle.

This lack of motivation is one of the biggest obstacles to integrating AI into educational institutions.

AI as a Classroom Cop

Many educational institutions have yet to fully grasp the potential that AI offers in today's rapidly digitalising environment. Concerns about academic integrity have been a major factor in many institutions' decision to introduce AI into the classroom. As a result, the prevailing opinion within education centres on employing AI as a safety precaution, a watchdog looking out for risks like plagiarism. Van der Weijden emphasizes that schools primarily consider AI from a security standpoint, screening for potential violations of academic integrity such as work produced with tools like ChatGPT. But is this the real potential of AI in education? Hardly.

Exploring AI’s Potential

The application of AI in education goes beyond mere plagiarism detection. According to Van der Weijden, "There are also increasingly better AI tools," not just for detection but also for completely altering how subjects are taught and assessed. Schools need to be receptive to the opportunities that AI can bring. However, according to Van der Weijden, it is still unpredictable what exactly AI will bring in the future: "The applications are so versatile. All we know is that the ones who start using it first will be the frontrunners in what they do." While students quickly adopt AI, schools frequently fall behind, reacting rather than inventing. Educational institutions require a paradigm shift rather than just trying to "keep up with what the market will do."

From Digital Guard to Guide

The interview with Van der Weijden sheds light on the need for educational institutions to close the digital gap. AI shouldn't be viewed solely as a detection tool. It should be seen as a collaborator with the potential to completely alter how we transmit and consume knowledge. The incorporation of AI into the curriculum is not only advantageous but also necessary, given its capacity to offer individualized learning experiences, immediate feedback, and data-driven insights. Schools should begin educating staff members and students on the advantages and disadvantages of this disruptive technology, as well as on how to use it effectively to navigate and shape a new curriculum.

AI shouldn't be viewed solely as a detection tool. It should be seen as a collaborator with the potential to completely alter how we transmit and consume knowledge.

Consultant’s Call to Action

What is the real problem? Reactivity versus proactivity. Educational institutions can no longer afford to remain spectators in a world where AI is pervasive. Schools need to change from reactionary institutions into proactive ones. Consultants should also take on this proactive role. Before technologies like ChatGPT become commonplace among students, consultants should speak with schools about the possible uses and abuses of these platforms, as Van der Weijden aptly puts it. The gap between what students already know and what they need to know about AI can be bridged with a proactive approach. IT consultants need to do more than just share their technical insight and expertise to increase schools' adoption of AI. They should take an active role in engaging schools in discussion about the revolutionary possibilities of AI, as well as in providing practical learning approaches, like workshops and training. A hands-on approach will help educational institutions gain the knowledge and confidence to effectively incorporate AI into their curricula.

Educational institutions can no longer afford to remain spectators in a world where AI is pervasive.

Broadening Horizons

Thus, to fully harness the power of AI, schools must change their perspective. It's time to embrace AI as an ally, a tool that can not only detect errors but also improve the educational environment, rather than viewing it only as a danger or a gatekeeper. It's an appeal to lead, create, and set the standard for the future rather than merely keeping up with the times. From where we stand right now, the future of education with AI is not just promising; it is exhilarating.

Acknowledgement Statement

This blog is part of the student writing competition in the Management Consulting Master's Program at the School of Business and Economics.

Author

Bo de Jong
Vrije Universiteit Amsterdam

Bo de Jong is a Master’s student in Business Administration with a specialisation in Management Consulting. She is interested in business processes, change management, and strategic issues along with innovation and digitalisation.

Is Generative AI ethically trustworthy enough to be used in consulting?

The popularity of Generative AI in professional service firms is growing. However, the use of language tools to take over human responsibilities remains limited due to ethical considerations. Huge investments are being made in the development of in-house language tools, but are their capabilities more extensive than those of a public tool such as ChatGPT? This article provides a practical comparison.

Author

Maxime Majoie
Vrije Universiteit Amsterdam
13 November 2023

Disruption is a fearful term used more and more often when talking about the transformative force of generative artificial intelligence (GenAI) in Professional Service Firms (PSFs), among which are consultancy firms. Why fearful? The popularity of natural language processing (NLP) technologies – a type of generative AI – is growing rapidly. They have the astonishing ability to generate novel content (images, text, etc.), answer complicated questions, and offer solutions at a speed the human mind cannot match. GenAI systems use models that have been trained on extensive datasets to comprehend the patterns and relationships found in the data. This also means the tool can only produce solutions and answers that resemble its training data. As a result of this fearful disruption, consultancy firms are increasingly adopting this state-of-the-art technology to assist them in their daily work. Certainly, this will have an impact on the status quo of consultancy firms and their corporate hierarchy. But is this new technology so advanced that it will completely replace the trust in human expertise? Today, public and in-house language tools are progressively used in consultancy, but not all ethical and practical implications have been overcome. To analyse this in more detail, a senior consultant working in the Netherlands was asked to share her insights and point out what exactly is fearful about using the two types of GenAI.
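To make the point about pattern-bound output concrete, here is a deliberately toy sketch: a tiny bigram "language model" trained on a one-sentence corpus can only ever recombine word sequences it has seen. This is an illustration only, not how production GenAI systems work internally; they use vastly larger neural networks, but the principle that output mirrors training data is the same.

```python
# Toy illustration: a model "trained" on a corpus can only recombine
# patterns found in that corpus.
import random
from collections import defaultdict

corpus = "the consultant writes the report and the client reads the report".split()

# "Training": record which word follows which (a bigram model).
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

# "Generation": every continuation is drawn from the training data's patterns,
# so the model can never produce anything its corpus did not contain.
word, output = "the", ["the"]
for _ in range(6):
    word = random.choice(transitions[word])
    output.append(word)
print(" ".join(output))  # e.g. "the report and the client reads the"
```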

Public/Open Language Tools

Changing the status quo: is there a transformative paradigm?

The use of publicly owned, openly available language models such as ChatGPT, Bard, Quillbot or Agros Multilingual is growing for consultants' daily tasks, such as paraphrasing, enhancing sentence structure, translating, and writing introductions. Indeed, these technologies greatly help consultants do the same amount of work in a shorter amount of time. Fewer and fewer people are needed to complete the tasks: the status quo of consulting is bound to change. The interview with a senior consultant confirmed this transformative paradigm: "Consultants will likely lose their jobs; as a result, layers of the corporate hierarchy will get thinner, but they won't disappear." Your level in the corporate hierarchy represents your responsibility, based on your expertise and built-up trust, within the firm and with your clients.

Consultants will likely lose their jobs; as a result, layers of the corporate hierarchy will get thinner, but they won’t disappear.

The operational gap

Clearly, an operational gap can be identified between the rather simple, efficient tasks that GenAI is allowed to perform and the tasks for which the human eye remains essential. Ethical considerations are the reason this gap still exists. The use of GenAI remains an ethical uncertainty, as its trustworthiness cannot be determined or guaranteed. This is why public GenAI tools carry a confidentiality risk. At the interviewed expert's firm: "Open language tools, such as ChatGPT and Google Translate are not allowed to be used. Because it can give too much client information, even sharing the smallest bits, is not allowed."

Open language tools, such as ChatGPT and Google Translate are not allowed to be used. Because it can give too much client information, even sharing the smallest bits, is not allowed.

By entering data, you give consent for it to become available training data, on which other answers and possible solutions generated by that tool will be based. In addition, there is opacity: the lack of transparency that AI algorithms offer their users about their generating operations and reasoning. This directly contradicts a core responsibility of the consultant: being able to explain your sources and information at all stages of the consulting research. A related hazard is the mixture of factual and non-factual data, which makes the credibility of information generated through language tools questionable. Evidently, open GenAI is still quite far from taking over all tasks of a consultant. The human eye is required as the controlling factor, ensuring the trustworthiness and accountability of the firm towards clients.
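One practical way firms limit this confidentiality risk is to scrub obvious client identifiers before any text reaches a public tool. The sketch below is a minimal illustration of that idea, assuming simple regex-based redaction; the patterns and the client name are hypothetical examples, and a real data-loss-prevention setup would be far more thorough.

```python
# Minimal, illustrative pre-processing sketch (not a complete DLP solution):
# scrub obvious client identifiers before text is sent to an external tool.
# All patterns and names below are hypothetical, not a real firm's policy.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),        # email addresses
    (re.compile(r"\b\d{2}[- ]?\d{3}[- ]?\d{4}\b"), "[PHONE]"),  # simple phone numbers
    (re.compile(r"\bAcme Corp\b", re.IGNORECASE), "[CLIENT]"),  # known client names
]

def redact(text: str) -> str:
    """Replace sensitive substrings with placeholders."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(redact("Contact j.doe@acmecorp.com about the Acme Corp merger."))
# -> Contact [EMAIL] about the [CLIENT] merger.
```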

Private/in-house Language Tools

In-house language tools

"Bullshit in is bullshit out…", as the interviewed expert put it. She reflected on the tendency of PSFs to make large investments in in-house GenAI tools. For instance, PwC has invested 1 billion dollars in its own (pilot) ChatGPT, and EY has invested 1.4 billion dollars in an AI platform and announced future spending on its language model tools. The aim is manifestly to create a more contained, secure, transparent, and controllable GenAI tool that can solve most security and confidentiality issues while generating solutions on valid data. As confirmed by the interviewed expert, this disruption would hypothetically have a great impact on the transformative paradigm, as it could close the operational gap. Notably, the ability to have this impact will be limited to the individual large (international) PSFs that have abundant datasets and the funds to create, manage, and continuously improve these language models. But again, not without (ethical) risks.

A limited investment

Every language model is only as big, or as limited, as its dataset. This means the quality of your in-house generated solutions will be based on, and trained with, the company's own data only. Hence the idea of bullshit in, bullshit out. Moreover, external factors and awareness are excluded to keep the data contained, limiting the ability to research or find undiscovered information, a quality for which a public language tool like ChatGPT is explicitly valued. Most disappointingly, it will depend on client confidentiality agreements whether their data can be shared (anonymously) with the in-house tool. "With current strict security and confidentiality agreements, sometimes team members are not allowed to view all data available", let alone all employees of a large (international) consultancy firm. Despite the promising efforts by large PSFs, overcoming the predominant confidentiality problem with a trustworthy in-house GenAI tool is not yet guaranteed.

With current strict security and confidentiality agreements, sometimes team members are not allowed to view all data available

In conclusion, the application of Generative Artificial Intelligence (GenAI) in consulting raises both the prospect of a paradigm shift and ethical issues. Public language tools improve productivity and effectiveness but pose concerns about confidentiality and transparency. Large firms are investing heavily in in-house tools to address the operational gap and offer solutions that are transparently and securely governed. The phrase "bullshit in, bullshit out" perfectly captures the problem of using only internally confined data. In-house GenAI technologies have a lot of work to do before they can be fully trusted and held accountable, despite their potential. To maintain its credibility while navigating the changing context of human-machine collaboration, the consulting industry must still strike a balance between productivity and ethical considerations to ensure its trustworthiness.

Acknowledgement Statement: This blog is part of the student writing competition in the Management Consulting Master's Program at the School of Business and Economics.

Author

Maxime Majoie
Vrije Universiteit Amsterdam

Maxime Majoie is a Belgian student at the School of Business and Economics of the Vrije Universiteit Amsterdam. She is currently pursuing an MSc in Business Administration with a specialisation in Management Consulting. She has a background in the humanities, having studied International Studies at Leiden University.

AI and Consulting: Should Algorithms Find Their Rhythm, or Not?

Amidst rapid AI progress, consultancies face a dilemma: harnessing AI's efficiency while paying attention to its environmental impact. Experts warn against over-reliance on AI, emphasizing its lack of the human empathy and understanding that are crucial in consultancy. Yet a sustainable future lies in balancing AI's benefits with environmental responsibility. Consultants must retain their expertise, prioritizing analysis and client relationships, and steer the industry towards an environmentally friendly, AI-supported future.

Author

Arne van Faassen
Vrije Universiteit Amsterdam
6 November 2023

The world as we know it today is developing at a rapid pace, with technological advancements appearing like wildflowers and transforming our technological landscape. One of the most prominent recent developments is Artificial Intelligence (AI), with tools like OpenAI's chatbot ChatGPT gaining a bigger user base by the day, and consultants not shying away from using it. With AI able to perform lightning-fast idea formulation and data analytics on big data, it has become interesting for consultancies to explore developing their own AI tools.

However, as with every new and hot topic that seems to conquer our world and revolutionize it with its benefits, AI too comes with certain complications. Firstly, a complication that is often not spoken about is the energy usage of AI. Developing and maintaining AI systems comes not only with financial costs but also with environmental costs. Secondly, the human factor should not be overlooked, especially in the work of consultants, where thoroughly considered solutions are formed through a deep and detailed understanding of the problem and situation at hand.

So an important question arises when thinking about the future of consultancies in light of AI implementation: should consultancies explore the possibility of using AI to take over their workload and take the environmental effects for granted? To answer this question, I interviewed Ton Metselaar, an experienced management consultant at a successful business and technology consulting firm. The conclusion is that we cannot let this rapid development run its course without looking at the consequences. Every innovation should be assessed on its environmental impact, and ways to limit energy usage should be explored. Furthermore, the human factor in consultancy work is crucial due to consultants' expertise and relationship with the client, and this should stay at the core of the value they bring.

Exploring the benefits AI can offer

The speed and completeness of the answers AI bots currently provide are impressive, to say the least. The convenience of having a little AI helper can help initiate ideas and save time on routine tasks. But the future holds more: consultancies will try to reap the benefits of using AI to analyse huge chunks of data within seconds, where conventional data analysis would take significantly longer. Saving time saves costs as well, of course, making the possibilities interesting to explore.

Unravelling the environmental threats of AI

Yet, where people picture a world in which AI performs all their tasks and we no longer have to do any work ourselves, it is not without drawbacks. Like other technological advancements, AI models have a carbon footprint. The humongous amount of data stored on data center servers requires a lot of energy and water to run the servers, equipment, and cooling systems. Ton Metselaar explains that it is like validating the transactions for Bitcoin, which is known to consume large amounts of energy. AI models produce CO2 emissions in three ways. The initial training of the model is the first and most energy-consuming way: researchers have calculated the emissions of training one large model with neural architecture search at roughly 626,000 pounds (about 284 tonnes) of CO2, comparable to the lifetime emissions of five average cars (Strubell et al., 2019). Furthermore, the carbon footprint of keeping the generative models running and letting people interact with them is slightly below these emissions. The last way is through updating and tailoring the model to a consultancy's specific information, which uses the least energy.

With the enormous growth of AI, the energy usage of these models will rise as well and have a threatening impact on our environment.
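As a back-of-envelope check on the training figure above, the conversion is simple arithmetic. The sketch below uses the 626,155 lbs training estimate and the average-car lifetime figure (126,000 lbs including fuel) reported by Strubell et al. (2019); the code itself is purely illustrative.

```python
# Back-of-envelope check on the training-emissions figure cited above, using
# the numbers reported by Strubell et al. (2019): 626,155 lbs of CO2 for one
# large model trained with neural architecture search, versus 126,000 lbs for
# the lifetime emissions (incl. fuel) of an average car.
LBS_PER_TONNE = 2204.62

training_emissions_lbs = 626_155
car_lifetime_lbs = 126_000

tonnes = training_emissions_lbs / LBS_PER_TONNE            # ~284 tonnes of CO2
car_lifetimes = training_emissions_lbs / car_lifetime_lbs  # ~5 car lifetimes

print(f"{tonnes:.0f} tonnes CO2, about {car_lifetimes:.1f} average-car lifetimes")
# -> 284 tonnes CO2, about 5.0 average-car lifetimes
```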

Analysing the importance of the human factor

Beyond these drawbacks, AI also contains flaws. As Ton Metselaar accurately puts it: "AI can make a lot of suggestions and have ideas on how to make things simpler but as a model it can never be completely without mistakes". AI might interpret certain factors wrongly or miss contextually important details, which can lead to wrong outcomes and bad solutions. It is therefore not a reliable source. In addition, AI lacks human empathy and understanding, since it is unable to genuinely understand client concerns. Clients always seek an emotional and human understanding that AI cannot offer. This can make clients feel they are treated as data points rather than as individuals with unique requirements and specific concerns to be addressed. AI cannot establish trust or build a reputation with clients through meaningful conversations, active listening, and the human touch that comes from genuine interactions, which are the cornerstones of client relationships.

Proposing a sustainable way of AI usage

But what should our future look like then?

The ideal future is a sustainable one: sustainable not only for the environment but also for the careers of consultants.

Ton Metselaar goes on to explain that a good consultant must be an expert in their field. It is therefore vital that they listen carefully to what a client needs and to the specific situation the client is in. This information should then be mindfully analysed to arrive at the optimal solution. Ultimately, it needs to be properly communicated to the client, with whom building and maintaining the relationship is indispensable. All these aspects show that AI cannot replace the human factors consultants bring to the table. Furthermore, with our world threatened by drastic environmental issues, "We should not only save the planet, but also save ourselves. The climate will likely kill us before it will kill the planet", as Ton Metselaar warned in the interview. Looking at the solutions in further detail, we should not aim for every consultancy to build its own specialised AI model, but rather build on existing ones to limit energy usage (Kumar & Davenport, 2023). Moreover, AI usage needs to be incorporated into the monitoring of a company's emissions, so that their magnitude is known. To truly seek responsible improvement, environmentally friendly AI developments should be pursued instead of creating completely personalized ones.

Reference list

Kumar, A., & Davenport, T. (2023, July 20). How to Make Generative AI Greener. Harvard Business Review. https://hbr.org/2023/07/how-to-make-generative-ai-greener?autocomplete=true

Strubell, E., Ganesh, A., & McCallum, A. (2019). Energy and Policy Considerations for Modern Deep Learning Research. Proceedings of the AAAI Conference on Artificial Intelligence, 34(09), 13693–13696. https://doi.org/10.1609/aaai.v34i09.7123

Acknowledgement Statement

This blog is part of the student writing competition in the Management Consulting Master's Program at the School of Business and Economics.

Author

Arne van Faassen
Vrije Universiteit Amsterdam

Arne van Faassen is a 24-year-old Management Consulting Master's student at the Vrije Universiteit, currently based in Amsterdam. After obtaining his Bachelor's degree in Business Administration at the University of Amsterdam, he gained experience as a consultant working at Page Personnel in the field of IT recruitment. His interests lie in entrepreneurship, strategy, and optimization to help organizations unlock their full potential.