The Algorithmic Ascent: Charting a Human-Centric Trajectory for Artificial Intelligence in Higher Education by 2025

Institution: University of Bareda Queen, Hochschulstrasse 4, 3012 Bern, Switzerland Date: May 2025

Abstract

As of May 2025, Artificial Intelligence (AI) has transitioned from a nascent technology to a fundamental pillar within higher education, presenting transformative potential alongside critical challenges. The global AI in education market has seen exponential growth, underscoring its pervasive influence. This paper, from the perspective of the University of Bareda Queen (UBQ), examines the multifaceted impact of AI on higher education. It articulates UBQ’s vision for a human-centric approach, emphasising the paramount importance of ethical considerations, pedagogical innovation, and the cultivation of comprehensive AI literacy for both students and educators. Key research directions are explored, including the development of explainable AI (XAI) for learning analytics, understanding the cognitive and affective impacts of AI tools, establishing robust ethical frameworks, and designing novel pedagogical models. The paper highlights a discernible gap between the rapid, bottom-up adoption of AI tools by individuals and the slower, more deliberative pace of top-down institutional strategic integration, creating a need for HEIs to become ‘AI competent’. UBQ is committed to navigating this complex landscape, aiming to shape a future where AI augments human capabilities, fosters critical thinking, and contributes to equitable and impactful education for societal benefit.

1. Introduction: The AI Imperative in 2025 and the University of Bareda Queen’s Vision

1.1 The Pervasive Influence of AI by May 2025

By May 2025, Artificial Intelligence is no longer a futuristic concept but an integral component of the global economic and societal fabric, with the education sector being no exception. The market for AI in education has witnessed substantial expansion, with global revenues reaching billions of pounds sterling, indicative of its deep penetration. Current data reveals that over 47% of educational leaders incorporate AI tools into their daily routines, and a significant 60% of teaching staff actively utilise AI in their pedagogical practices. The Stanford HAI AI Index Report 2025 further corroborates this trend, highlighting AI’s increasing embeddedness in everyday life and the record levels of investment it attracts. This pervasive influence underscores the critical need for higher education institutions (HEIs) to strategically engage with AI, not merely as a technological novelty, but as a core element shaping their operational and strategic landscape. The rapid proliferation of AI tools signifies a fundamental paradigm shift, compelling a re-evaluation of traditional educational models and practices.  

1.2 The Critical Role of Higher Education Institutions (HEIs)

Higher Education Institutions stand at the nexus of AI development, research, and the cultivation of an AI-literate populace. They bear the responsibility of preparing the next generation of AI professionals and ensuring that all graduates possess the competencies to navigate an AI-suffused world. This entails significant challenges in adapting curricula, pedagogical methodologies, and assessment practices to effectively integrate AI. International bodies such as UNESCO have underscored the pivotal role of HEIs in providing ethical guidance, promoting responsible AI deployment, and spearheading teacher training initiatives to harness AI’s potential beneficially. Consequently, HEIs are not passive consumers of AI technology; they are active agents in shaping its trajectory, its ethical boundaries, and its societal impact. Their strategic response to the AI imperative will profoundly influence the quality of the future workforce, the ethical application of AI, and the overall societal assimilation of these powerful technologies.  

1.3 The University of Bareda Queen’s (UBQ) Vision and Commitment

The University of Bareda Queen (UBQ), cognisant of its rich academic heritage and its location within a nation at the forefront of innovation, embraces the transformative potential of AI in education. Building upon established strengths in Computer Science, Artificial Intelligence in Medicine, and a commitment to interdisciplinary research, UBQ envisions a future where AI serves as a powerful catalyst for enhancing learning, fostering research, and promoting societal well-being. Our commitment is to pioneer the responsible, ethical, and impactful integration of AI across all facets of the academic enterprise. This paper articulates UBQ’s perspective on the evolving AI landscape in higher education as of May 2025, outlining key considerations, strategic research directions, and our dedication to a human-centric approach that prioritises human values and augments human capabilities.

A significant observation in the current landscape is the disparity between the rapid, often individual-driven adoption of AI tools and the more measured, strategic integration at an institutional level. Data indicates high personal usage of AI by educators and students, fuelled by the accessibility of generative AI tools like ChatGPT, and a burgeoning market for AI educational products. However, a considerable number of HEIs still lack comprehensive, institution-wide AI strategies, or possess strategies that are not fully aligned with their overarching institutional goals. This suggests a bottom-up adoption dynamic, where the perceived utility and ease of access to AI tools outpace the development of cohesive, top-down institutional frameworks. The inherent complexity of addressing ethical, pedagogical, infrastructural, and academic integrity considerations at an institutional scale contributes to this lag. This can create an environment where AI usage is widespread yet potentially unguided, leading to risks such as inconsistent quality, challenges to academic integrity, and missed opportunities for strategic, institution-wide benefits. UBQ’s vision directly addresses the need to bridge this gap by fostering a holistic and strategically aligned approach to AI.

This leads to the emerging imperative for HEIs to evolve into ‘AI competent institutions’. The challenges presented by AI are multifaceted, encompassing ethical dilemmas, pedagogical adaptations, infrastructural demands, and overarching strategic integration. Effectively navigating these complexities requires more than just training individual users; it demands systemic institutional capabilities. The concept of an ‘AI competent institution’ thus emerges, signifying a holistic capacity to strategically integrate AI, manage associated risks, foster ethical usage, ensure equitable access, and continuously adapt to the evolving technological landscape. The EU AI Act, with its mandates for training and risk classification for AI systems, further underscores this need for institutional preparedness. The future success, relevance, and societal contribution of HEIs will increasingly depend not only on their research output concerning AI but also on their demonstrated ability to wisely and effectively govern and utilise AI within their own educational missions and operations. This aligns with the Swiss legal direction, which focuses on public authorities and key sectors, suggesting that public HEIs like UBQ will be instrumental in modelling such institutional competence.

2. Evolving Pedagogical Landscapes: AI as a Catalyst for Educational Innovation

2.1 AI-Driven Pedagogical Advancements

The integration of Artificial Intelligence into educational settings has catalysed a wave of pedagogical advancements, promising to reshape traditional teaching and learning paradigms. As of May 2025, several key AI applications have demonstrated considerable potential. These include highly personalised learning pathways that adapt to individual student progress and learning styles, intelligent tutoring systems (ITS) offering tailored support and feedback, and the development of ‘smart content’ that can be dynamically adjusted to meet learner needs. Furthermore, the advent of sophisticated generative AI tools, including custom Generative Pre-trained Transformers (GPTs) and AI-powered chatbots, is enabling educators to create more bespoke and interactive learning experiences. Students are increasingly leveraging these technologies for tasks such as brainstorming complex ideas, refining their written work, and organising research materials. These advancements collectively aim to transcend the limitations of conventional one-size-fits-all educational models by offering learning experiences that are more adaptive, engaging, and responsive to the diverse needs of individual students, thereby potentially enhancing both learning outcomes and student satisfaction.  

One of the most lauded promises of AI in education is its capacity to deliver highly personalised learning experiences. However, a critical examination reveals a potential ‘personalisation paradox’. While AI algorithms can meticulously tailor content and pace to individual student profiles, there is an emerging concern that this could inadvertently lead to a ‘homogenisation of learning’ and the creation of ‘echo chambers’. AI models are trained on vast datasets, which may inherently reflect dominant cultural perspectives or biases. If personalisation algorithms primarily reinforce existing knowledge pathways or narrow a student’s exposure based on initial interactions and demonstrated preferences, they risk limiting exposure to diverse viewpoints, challenging concepts, or serendipitous discoveries that are crucial for holistic intellectual development. The drive for efficiency in delivering personalised content might, therefore, conflict with the broader educational goal of expanding intellectual horizons and fostering critical engagement with a wide spectrum of ideas. This necessitates that pedagogical strategies actively design for intellectual curiosity and the deliberate introduction of diverse and challenging material within AI-driven personalised learning environments, moving beyond mere content mastery to cultivate a richer, more varied learning journey.
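The deliberate-diversity strategy described above can be sketched as a recommender that reserves a quota of each recommendation batch for material outside the learner's demonstrated preferences. The catalogue, field names, and quota mechanism below are illustrative assumptions for the sake of the sketch, not a description of any deployed system:

```python
import random

# Hypothetical content catalogue; topics and ids are invented for illustration.
CATALOGUE = [
    {"id": "a1", "topic": "statistics"},
    {"id": "a2", "topic": "statistics"},
    {"id": "b1", "topic": "ethics"},
    {"id": "c1", "topic": "history_of_science"},
]

def recommend(profile_topics, k=3, diversity_quota=1, rng=None):
    """Return k item ids, reserving `diversity_quota` slots for material
    outside the learner's demonstrated topic preferences."""
    rng = rng or random.Random(0)
    familiar = [c for c in CATALOGUE if c["topic"] in profile_topics]
    novel = [c for c in CATALOGUE if c["topic"] not in profile_topics]
    # Fill the diversity slots first, then top up with profile-matched items.
    picks = rng.sample(novel, min(diversity_quota, len(novel)))
    picks += familiar[: k - len(picks)]
    return [p["id"] for p in picks]
```

Raising the quota trades short-term content mastery for broader exposure; the pedagogically interesting question is how that trade-off is set and whether the learner retains agency over it.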

2.2 The Synergy of Human Educators and AI

A prevalent narrative surrounding AI in education centres on the fear of technological replacement. However, the prevailing and more constructive view, as of 2025, positions AI not as a substitute for human educators but as a powerful collaborator capable of augmenting their capabilities. AI systems can efficiently automate a range of administrative and routine pedagogical tasks, such as grading standardised assessments, generating initial drafts for lesson plans, and tracking student progress analytics. This automation frees valuable educator time, allowing them to focus on the more complex and uniquely human aspects of teaching. These include providing nuanced, individualised feedback, facilitating in-depth discussions, mentoring students, fostering critical thinking and creativity, and addressing social-emotional learning needs. In this synergistic model, educators are increasingly envisioned as ‘learning architects’, skilfully orchestrating learning experiences by leveraging AI tools to enhance their teaching practice while preserving the indispensable human element of guidance, inspiration, and relational pedagogy.

2.3 Fostering Critical Thinking, Collaboration, and Interdisciplinarity with AI

Beyond content delivery and administrative support, AI holds the potential to foster essential 21st-century competencies such as critical thinking, collaboration, and interdisciplinary understanding. Thoughtfully designed AI-driven learning experiences can prompt students to engage critically with information, analyse complex problems, and evaluate diverse perspectives. Research indicates AI’s utility in developing problem-solving skills through interactive simulations and adaptive challenges. However, this potential is not without its caveats. A significant concern is that an over-reliance on AI tools, particularly generative AI that provides readily available answers, might inadvertently diminish students’ intrinsic motivation to grapple with difficult concepts or develop their own critical thinking faculties. Therefore, it is crucial to cultivate an understanding among students that AI is a ‘learning buddy’ or a cognitive tool to be worked with, rather than a definitive source of truth to be passively accepted. Pedagogical approaches must explicitly teach students how to query AI effectively, critically assess its outputs, identify potential biases, and integrate AI-generated information into their own analytical frameworks. This nuanced engagement is key to ensuring AI serves as a scaffold for higher-order thinking rather than a crutch that inhibits its development.  

The ease with which generative AI can produce sophisticated outputs presents a challenge related to the ‘illusion of competence’. Students might use these tools to complete assignments and generate plausible responses without deeply engaging with the underlying concepts or mastering the requisite skills. This can lead to a superficial understanding that is masked by the polished nature of the AI’s output. The cognitive effort and iterative struggle that are essential for deep learning and skill consolidation can be bypassed if AI is used as a substitute for intellectual engagement. This underscores the critical importance of pedagogical strategies that emphasise metacognition – teaching students how to learn effectively with AI. This includes developing skills in self-assessment, understanding the limitations of AI, critically evaluating the information it provides, and using AI as a tool to explore, question, and deepen their own understanding, rather than as an oracle that provides final answers. This aligns with the necessity for students to cultivate a “self-reflective mindset” regarding their use of and learning with AI.

The introduction of AI into the educational sphere is also acting as a profound mirror, compelling a re-evaluation of fundamental pedagogical tenets: what knowledge and skills are most valuable, and how should they be assessed in an age where AI can perform many traditional academic tasks? If AI can efficiently summarise texts, conduct basic research, or even generate coherent essays, then traditional assessments focusing solely on these tasks lose some of their validity and purpose. This forces educators and institutions to confront deeper questions about the core objectives of education. What are the uniquely human competencies that HEIs must prioritise and cultivate? These increasingly appear to be higher-order cognitive skills such as sophisticated critical analysis, nuanced ethical judgment, creative problem-solving in novel contexts, and effective interpersonal collaboration – abilities that AI, in its current form, cannot fully replicate. Thus, the ascent of AI is not merely a technological disruption but a catalyst for a significant pedagogical and philosophical realignment. It requires HEIs to refine their value proposition, shifting emphasis towards nurturing these enduring human capacities that will remain critical in an AI-augmented world. This necessitates a thoughtful reconsideration of pedagogical and assessment approaches to ensure they are fit for this new era.

2.4 Integrating Learning Sciences with AI Development

To ensure that AI educational tools are not only technologically advanced but also pedagogically sound and effective, their development must be deeply informed by the learning sciences. There is a growing body of research dedicated to extending established theories, such as Cognitive Load Theory (CLT) and Mayer’s Cognitive Theory of Multimedia Learning (CTML), to the context of AI-enhanced learning environments. For instance, conceptual models like the Cognitive Load-Aware Modulation (CLAM) strategy are being proposed. CLAM aims to guide the design of personalised, data-driven instructional systems that are dynamically attuned to learners’ real-time cognitive load and emotional states, using multimodal indicators to adapt instruction before cognitive overload impedes learning. Furthermore, studies are investigating the impact of AI tools on student self-efficacy, motivation, and other affective dimensions of learning. This integration of learning sciences ensures that AI applications are designed with a clear understanding of how students learn, how information is processed, and how technology can optimally support these cognitive and affective processes, rather than being driven solely by technological capabilities.  
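As a rough illustration of the kind of load-aware modulation a strategy like CLAM envisages, the sketch below blends hypothetical multimodal indicators into a single load estimate and selects an instructional adjustment before overload impedes learning. The signal names, weights, and thresholds are invented for illustration and would require empirical calibration against real learner data:

```python
from dataclasses import dataclass

@dataclass
class LoadSignals:
    """Illustrative multimodal indicators, each normalised to 0..1.
    All fields are hypothetical placeholders, not CLAM's actual inputs."""
    error_rate: float          # recent mistakes on practice items
    response_latency: float    # slowness relative to the learner's own baseline
    self_reported_effort: float  # e.g. a one-click effort probe

def estimate_load(sig: LoadSignals) -> float:
    # Naive weighted blend; a real system would calibrate per learner.
    return (0.4 * sig.error_rate
            + 0.3 * sig.response_latency
            + 0.3 * sig.self_reported_effort)

def modulate(sig: LoadSignals) -> str:
    """Pick an adaptation *before* cognitive overload sets in."""
    load = estimate_load(sig)
    if load > 0.75:
        return "reduce_extraneous_load"   # strip decorative media, chunk content
    if load > 0.5:
        return "add_worked_example"       # scaffold germane processing
    if load < 0.25:
        return "increase_challenge"       # avoid boredom and underload
    return "maintain"
```

The point of the sketch is the control loop, not the numbers: instruction is adapted continuously from load estimates rather than from performance scores alone, in line with CLT and CTML.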

The following table provides a structured overview of key AI applications in higher education, their pedagogical implications, benefits, challenges, and strategies for effective integration:

Table 1: Key AI Applications in Higher Education and their Pedagogical Implications (May 2025)

Personalised Learning Systems (PLS)
Description: Platforms that adapt learning paths, content, and pace to individual student needs based on performance data.
Key benefits: Tailored learning experiences, improved student engagement, addresses diverse learning needs, self-paced learning.
Potential challenges/ethical concerns: Data privacy, algorithmic bias potentially reinforcing inequalities, risk of creating filter bubbles, cost of implementation.
Pedagogical strategies for effective integration: Incorporate diverse content, provide student agency in path selection, regular audits for bias, transparent data usage policies. Emphasise critical evaluation of AI-suggested paths.

Intelligent Tutoring Systems (ITS)
Description: AI software that mimics human tutors, providing step-by-step guidance, feedback, and problem-solving support.
Key benefits: Immediate, individualised feedback, 24/7 availability, can target specific skill gaps, reduces teacher workload.
Potential challenges/ethical concerns: Can be less nuanced than human tutors, potential for over-reliance, development complexity and cost, ensuring pedagogical soundness.
Pedagogical strategies for effective integration: Use as a supplement to human teaching, focus on ITS for foundational skills, train students on effective ITS interaction, integrate with classroom activities for blended learning.

AI-Driven Assessment Tools
Description: Tools using AI for automated grading, plagiarism detection, and generating formative/summative assessments.
Key benefits: Efficiency in grading, consistent application of rubrics, immediate feedback for students, data for learning analytics.
Potential challenges/ethical concerns: Accuracy limitations in complex assessments, potential for bias in grading algorithms, concerns about academic integrity (AI-generated answers).
Pedagogical strategies for effective integration: Use for formative feedback primarily, human oversight for high-stakes assessments, transparent rubrics, educate students on ethical AI use in assessments. Focus on assessing higher-order thinking not easily automated.

Generative AI for Content Creation
Description: Tools like ChatGPT used for brainstorming, drafting text, summarising information, creating presentations.
Key benefits: Supports creativity, idea generation, research assistance, efficiency in content development for students and faculty.
Potential challenges/ethical concerns: Accuracy of generated content (hallucinations), plagiarism and academic integrity, over-reliance diminishing critical skills, ethical use of data.
Pedagogical strategies for effective integration: Teach prompt engineering, critical evaluation of AI outputs, ethical guidelines for use, focus on AI as a starting point for human refinement, integrate into assignments that require analysis and synthesis beyond AI capabilities.

Learning Analytics Dashboards
Description: Systems that collect, analyse, and visualise student data to inform pedagogical decisions and interventions.
Key benefits: Early identification of at-risk students, insights into learning patterns, data-informed teaching strategies, personalised support.
Potential challenges/ethical concerns: Data privacy and security, ethical use of student data, potential for misinterpretation of data, ensuring actionable insights.
Pedagogical strategies for effective integration: Train educators in data literacy, ensure transparent data collection and use policies, focus on supportive rather than punitive interventions, empower students with their own data.

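The supportive, human-in-the-loop use of learning analytics described in Table 1 might look like the following sketch, in which the system only surfaces candidates for an advisor's review and never acts on students directly. The thresholds and record fields are hypothetical:

```python
def flag_for_advisor(students, min_logins=5, min_submission_rate=0.6):
    """Flag students whose engagement falls below illustrative thresholds,
    for a *supportive* human follow-up rather than an automated sanction."""
    flagged = []
    for s in students:
        reasons = []
        if s["weekly_logins"] < min_logins:
            reasons.append("low platform engagement")
        if s["submission_rate"] < min_submission_rate:
            reasons.append("missed assignments")
        if reasons:
            # The dashboard only *suggests*; an advisor reviews every case.
            flagged.append({"id": s["id"], "reasons": reasons})
    return flagged
```

Keeping the output as human-readable reasons, rather than an opaque risk score, supports the transparency and data-literacy strategies listed in the table.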

3. The Ethical Compass: Ensuring Responsible and Equitable AI in Educational Ecosystems

3.1 Critical Ethical Challenges in AIED

The integration of Artificial Intelligence into Education (AIED) brings forth a spectrum of critical ethical challenges that demand careful consideration and proactive management. Foremost among these are concerns related to algorithmic bias, discrimination, and fairness. AI systems, trained on historical data, can inadvertently perpetuate or even amplify existing societal biases, potentially leading to inequitable outcomes for students from marginalised communities. Data privacy and the potential for pervasive surveillance are also significant issues, as AIED tools often require access to extensive student data to function effectively. Ensuring transparency and explainability (XAI) in AI decision-making processes is crucial for building trust and allowing for scrutiny and redress when errors or biases occur. Accountability for AI-driven actions and their consequences remains a complex area, particularly when harm arises. Above all, there is an overarching imperative to preserve human agency, dignity, and autonomy within educational ecosystems increasingly mediated by AI. The United Nations Secretary-General has specifically highlighted safety, equality, and accountability as critical areas requiring global attention in the context of AI governance. Addressing these ethical dimensions proactively is not merely a matter of compliance but a fundamental prerequisite for fostering an environment where AI genuinely benefits all learners and upholds the core values of education.  

A significant hurdle in navigating the ethical landscape of AI is the “pacing problem,” where the rapid evolution of AI technology consistently outstrips the development and implementation of comprehensive regulatory and ethical frameworks. While legislative initiatives like the EU AI Act and Switzerland’s move to adopt the Council of Europe’s AI Convention represent important strides, their full implementation, interpretation, and adaptation to novel AI capabilities, such as increasingly sophisticated generative AI, will inevitably take time. AI development, particularly in areas like large language models, advances at an exponential rate, driven by intense research and commercial competition. In contrast, legislative and regulatory processes are, by their nature, more deliberative and slower to adapt. This inherent mismatch in speed means that HEIs like UBQ cannot afford to passively await definitive, all-encompassing legislation. Instead, they must cultivate agile, internal ethical review processes and establish robust “AI guardrails”. These internal mechanisms should be designed to adapt to rapid technological shifts, operating within the bounds of existing legal frameworks while proactively anticipating future regulatory directions. Such a proactive and adaptive stance is essential for fostering responsible innovation and mitigating ethical risks in a fast-moving technological environment.

3.2 The Swiss and European Regulatory Landscape (as of May 2025)

As of May 2025, the regulatory environment for AI in Switzerland and Europe is actively evolving. Switzerland currently does not possess specific, overarching AI legislation. Instead, it applies its existing technology-neutral laws to AI-related issues, covering areas such as data protection, liability, and intellectual property. A significant development occurred in March 2025 when Switzerland signed the Council of Europe’s Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law. The Swiss Federal Council intends to incorporate this convention into national law, with a primary focus on regulating AI use by public authorities and in high-risk applications, rather than imposing extensive obligations on the private sector for all AI systems. Legislative changes are anticipated to be predominantly sector-specific, with horizontal regulations for key areas impacting fundamental rights, such as data protection. A draft bill for consultation is expected by the end of 2026.  

Concurrently, the European Union’s AI Act, which was passed in March 2024, establishes a comprehensive, risk-based regulatory framework for AI across member states and entities interacting with the EU market. This landmark legislation categorises AI systems based on their potential risk to health, safety, and fundamental rights, imposing stricter requirements for high-risk applications. Key provisions include mandatory conformity assessments, data governance standards, transparency obligations, and human oversight requirements. Notably for the education sector, the EU AI Act mandates specific training for individuals working with AI systems from February 2025, and rules for classifying and managing high-risk AI systems will come into effect from August 2026. This evolving legal landscape provides an essential operational framework for HEIs like UBQ, guiding the ethical development, deployment, and governance of AI technologies in research and education.  
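At an institutional level, the Act's risk-based approach can be operationalised as a simple mapping from campus AI tools to risk tiers and the obligations each tier triggers. The four tiers below come from the EU AI Act itself; the assignment of specific (hypothetical) campus tools to tiers, and the abbreviated obligation lists, are illustrative institutional judgements, not legal advice:

```python
# The four risk tiers are the EU AI Act's; the obligation summaries are
# heavily abbreviated illustrations of what each tier entails.
OBLIGATIONS = {
    "unacceptable": ["prohibited - do not deploy"],
    "high": ["conformity assessment", "data governance", "human oversight", "logging"],
    "limited": ["transparency notice to users"],
    "minimal": ["voluntary code of conduct"],
}

# Hypothetical campus tools and an illustrative institutional classification.
CAMPUS_TOOL_TIERS = {
    "admissions_ranking_model": "high",   # influences access to education
    "exam_proctoring_analytics": "high",
    "course_faq_chatbot": "limited",
    "grammar_suggestions": "minimal",
}

def required_obligations(tool):
    # Default conservatively: an unclassified tool is treated as high-risk
    # until a proper assessment has been completed.
    tier = CAMPUS_TOOL_TIERS.get(tool, "high")
    return OBLIGATIONS[tier]
```

Even a registry this simple forces the key governance step: every AI tool on campus must be inventoried and assessed before deployment, which is precisely the kind of systemic capability an ‘AI competent institution’ requires.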

The discourse around AI ethics is maturing, moving beyond a reactive identification of risks towards a proactive embedding of ethical considerations throughout the entire AI lifecycle. This evolution is reflected in advanced international guidelines and emerging institutional best practices, which advocate for “Ethical AI by Design” or “Responsible AI” as a foundational culture, rather than merely an ethics checklist. Early discussions often centred on mitigating harms like bias and privacy violations after they occurred. However, the limitations of such a post-hoc approach are evident. Newer frameworks, including the EU AI Act and the Council of Europe Convention, as well as UNESCO guidelines, place a strong emphasis on proactive measures such as risk assessment, impact analysis, transparency, “meaningful human control,” and continuous monitoring and oversight. For an institution like UBQ, this paradigm shift implies a deeper commitment than simply establishing an ethics review board. It necessitates the integration of ethical reasoning into the curriculum for students developing AI, comprehensive training for researchers on conducting ethical impact assessments for their AI projects, and the creation of institutional structures that promote ongoing reflection, adaptation, and responsible governance of AI systems. The goal is to cultivate an ingrained ethical culture that permeates all AI-related activities, thereby maintaining public trust and ensuring that AI development and deployment consistently serve human values and societal good.  

3.3 UBQ’s Framework for Ethical AI Deployment and Research

In response to these ethical imperatives and the evolving regulatory landscape, the University of Bareda Queen is committed to developing and implementing a robust institutional framework for the ethical deployment and research of Artificial Intelligence. This framework will be grounded in core principles such as human-centricity, fairness, transparency, accountability, safety, and the preservation of human dignity, aligning closely with international guidelines promulgated by organisations like UNESCO and the OECD, as well as the foundational tenets of the Swiss AI Guidelines. These guidelines stress putting people first, ensuring transparency in AI operations, defining clear lines of accountability, and prioritising the safety and security of AI systems. UBQ’s framework will translate these broad principles into concrete operational policies, procedures, and educational initiatives. This includes establishing clear ethical review processes for AI research projects, developing guidelines for the responsible use of AI tools in teaching and learning, promoting AI ethics literacy among students and staff, and investing in research on ethical AI, including explainability and bias mitigation techniques. Our aim is to foster a culture of responsible innovation where ethical considerations are integral to every stage of AI development and application within the university.

The drive towards AI-driven personalisation in education, while promising significant benefits, inherently creates a tension with fundamental rights to privacy and individual autonomy. Effective personalisation relies on the collection and analysis of vast quantities of granular student data, encompassing academic performance, learning behaviours, engagement patterns, and potentially even affective states. The very data that fuels the personalisation engine also gives rise to significant privacy concerns regarding its collection, storage, security, and use. Furthermore, if AI systems become overly prescriptive in dictating learning paths or making critical educational decisions without sufficient transparency or student input, this can undermine learner autonomy and agency. There is a delicate balance to be struck between leveraging data to enhance learning and safeguarding individual rights. UBQ is therefore committed to championing research into and the implementation of privacy-preserving AI techniques, such as federated learning or differential privacy, within its educational technologies. This commitment extends to establishing transparent data governance policies that clearly articulate how student data is used, ensuring robust security measures, and designing systems that empower students with meaningful control over their personal data and learning choices. Clear communication regarding data practices, as emphasised by regulatory bodies like the Swiss Federal Data Protection and Information Commissioner (FDPIC), will be a cornerstone of UBQ’s approach.
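One of the privacy-preserving techniques mentioned, differential privacy, can be illustrated with the classic Laplace mechanism: calibrated noise is added to an aggregate statistic so that the released value reveals almost nothing about any individual student's record. The sketch below (standard-library Python, a counting query with sensitivity 1) is a minimal illustration, not a production mechanism:

```python
import math
import random

def laplace_noise(scale, rng):
    """Draw a Laplace(0, scale) sample via inverse-CDF sampling (stdlib only)."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon=1.0, rng=None):
    """Release a count satisfying epsilon-differential privacy.
    A counting query changes by at most 1 when one record is added or
    removed (sensitivity 1), so the required noise scale is 1/epsilon."""
    rng = rng or random.Random()
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon, rng)
```

With such a mechanism, course staff could publish, say, how many students watched a given lecture recording without any individual's viewing record being inferable. Smaller epsilon values give stronger privacy at the cost of noisier releases, and real deployments must also track the cumulative privacy budget across repeated queries.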

3.4 Ensuring Equitable Access and Mitigating the Digital Divide

A critical ethical consideration in the deployment of AI in education is the imperative to ensure equitable access and prevent the exacerbation of existing digital divides and societal inequalities. There are legitimate concerns that the benefits of AI-driven personalised education and advanced AI tools may disproportionately accrue to well-resourced institutions and students, potentially widening the gap between privileged and underprivileged learners. UNESCO has pointedly cautioned that financial investments in AI for education must be additional to, and not divert resources from, fundamental educational needs, particularly in regions where basic infrastructure such as electricity and internet connectivity in schools remains inadequate. UBQ recognises that the transformative potential of AI can only be fully realised if its benefits are accessible to all. This commitment involves advocating for policies that promote digital inclusion, supporting initiatives aimed at bridging the digital divide both locally and globally, and ensuring that our own AI-driven educational offerings are designed with accessibility and inclusivity at their core. This includes considering the diverse needs of learners, providing necessary support and training for engaging with AI tools, and exploring cost-effective solutions that can be more broadly adopted.  

The following table offers a comparative overview of key ethical frameworks and regulatory approaches relevant to AI in education:

Table 2: Comparative Overview of Ethical Frameworks and Regulatory Approaches for AI in Education (May 2025)

| Framework/Regulation | Key Principles Emphasised | Implications for HEIs | UBQ’s Alignment/Response |
| --- | --- | --- | --- |
| UNESCO Recommendation on the Ethics of AI (2021) & Guidance for Generative AI in Education (2023) | Human-centred values, fairness, non-discrimination, safety, security, transparency, explainability, privacy, data protection, human oversight, sustainability, awareness and literacy. | Adopt ethical principles in AI development and deployment, ensure teacher and student AI literacy, promote inclusive and equitable use, protect student data, integrate ethics into curricula. Suggests an age limit of 13 for AI use in the classroom. | UBQ aligns with UNESCO’s human-centric principles, integrating AI ethics into research protocols and curricula. Committed to enhancing AI literacy for all stakeholders and ensuring responsible data governance. Will consider age-appropriate guidelines. |
| OECD Principles on AI & Education Policy Outlook | Inclusive growth, sustainable development, human-centred values, transparency, explainability, robustness, security, safety, accountability. | Foster trustworthy AI systems, prepare students for AI-driven jobs, address ethical implications, promote international cooperation, re-evaluate competencies needed. | UBQ’s research and educational programmes aim to develop trustworthy AI and equip students with future-ready skills. Engages in interdisciplinary research on AI ethics and collaborates internationally. Actively reviewing curriculum for AI-era competencies. |
| EU AI Act (passed March 2024) | Risk-based approach (unacceptable, high, limited, minimal risk), transparency, data quality, human oversight for high-risk systems, conformity assessments. | Classify AI systems by risk, comply with specific requirements for high-risk AIED tools (e.g., in admissions, assessment), ensure data governance, provide training for staff using AI (from February 2025). | UBQ is developing internal processes to assess AI systems according to the risk levels defined by the EU AI Act. Implementing mandatory AI training for relevant staff and ensuring compliance with data and transparency obligations for deployed tools. |
| Swiss AI Guidelines (2020) & CoE AI Convention Adoption | People first, optimal regulatory conditions for innovation, transparency, traceability, explainability, accountability, safety, security, active shaping of AI governance, stakeholder involvement. | Focus on public authorities and high-risk sectors, with sector-specific legislation supplemented by non-binding measures. HEIs to ensure AI use aligns with fundamental rights and data protection. | UBQ adheres to the principles of the Swiss AI Guidelines, prioritising human well-being and responsible innovation. Actively participating in consultations and preparing for the integration of the CoE AI Convention into Swiss law, particularly concerning data protection and transparency in AI applications. Will contribute to the development of non-binding best practices. |

Data synthesised from:  
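The EU AI Act row above describes a risk-based approach to classifying AI systems. As a minimal illustration of how an HEI might triage its AI tool inventory for internal review, the sketch below encodes a few example use cases as a lookup table. The tier assignments and use-case names are assumptions for demonstration only: the actual classification of any system is a legal determination made against the Act and its annexes, not a dictionary lookup.

```python
# Illustrative only -- the real risk tier of a tool is a legal determination
# under the EU AI Act; these example mappings are assumptions for demonstration.
AIED_RISK_EXAMPLES = {
    "emotion_inference_in_exams": "unacceptable",  # emotion recognition in education is prohibited
    "admissions_scoring": "high",                  # education uses listed in Annex III
    "automated_exam_assessment": "high",
    "student_facing_chatbot": "limited",           # transparency obligations apply
    "grammar_checker": "minimal",
}

REVIEW_REQUIRED = {"unacceptable", "high"}

def needs_conformity_review(use_case: str) -> bool:
    """Flag tools that trigger prohibition or high-risk obligations.

    Unknown tools are flagged too: anything not yet classified
    should go through review rather than be deployed by default.
    """
    tier = AIED_RISK_EXAMPLES.get(use_case, "unclassified")
    return tier in REVIEW_REQUIRED or tier == "unclassified"

print(needs_conformity_review("admissions_scoring"))  # True
print(needs_conformity_review("grammar_checker"))     # False
```

The conservative default (unclassified tools are flagged for review) reflects the compliance posture described in the table: classification precedes deployment, not the other way round.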

4. Cultivating Future-Ready Graduates: AI Literacy and Competency Development at UBQ

4.1 Redefining Essential Skills and Literacies for an AI-Augmented World

The proliferation of Artificial Intelligence across all societal domains necessitates a fundamental redefinition of the essential skills and literacies required for graduates to thrive in an AI-augmented world. Beyond proficiency in specific AI technologies, there is a growing consensus on the critical importance of cultivating uniquely human competencies. These include advanced critical thinking, creativity and innovation, effective collaboration and communication, robust ethical reasoning, and a high degree of adaptability. Educational paradigms are shifting to emphasise not just learning about AI (understanding its principles and technologies) and learning to do AI (developing AI systems), but also learning with AI – using AI tools effectively and ethically as cognitive partners. Workforce preparation is a significant concern, with reports indicating that while a majority of students anticipate using generative AI in their future careers, many feel underprepared in terms of specific AI technology skills and understanding. This highlights a crucial gap that HEIs must address by ensuring their curricula foster both the technical competencies and the broader cognitive and ethical capacities essential for navigating the complexities of an AI-driven future.

An important consideration in developing AI literacy is the need to foster a “dual literacy” – an ability not only to use AI tools effectively but also to critically evaluate these systems, their outputs, and their broader societal implications. Many educational programmes may focus on the operational aspects of AI tools, teaching students the mechanics of interaction. However, the uncritical application or acceptance of AI-generated information can lead to the propagation of misinformation, the reinforcement of biases embedded in AI models, and a superficial understanding of complex issues. Therefore, comprehensive AI literacy must extend beyond mere technical proficiency to encompass a deep understanding of AI’s limitations, an awareness of potential ethical pitfalls, and the development of critical discernment. The power and pervasiveness of AI systems necessitate this more sophisticated form of literacy, one that combines technical skill with a robust critical and ethical awareness, to prevent misuse, mitigate harm, and foster responsible innovation. UBQ’s curriculum development efforts will explicitly aim to cultivate this dual literacy, integrating modules on AI ethics, critical data analysis, and bias detection within AI-specific courses, and weaving these critical perspectives into AI-related content across all academic disciplines. This approach moves decisively beyond simple “tool training” to empower graduates as discerning and responsible users and developers of AI.  

4.2 Curriculum Innovation at UBQ: Integrating AI Across Disciplines and Specialisations

The University of Bareda Queen is actively engaged in curriculum innovation to ensure our graduates are well-prepared for this AI-augmented future. UBQ already offers specialised programmes at the intersection of AI and various disciplines, such as the Master of Science in Artificial Intelligence in Medicine and the Certificate of Advanced Studies in Artificial Intelligence for Creative Practices, alongside a comprehensive Master’s programme in Computer Science. These programmes provide deep technical expertise and domain-specific AI application knowledge. However, our vision for AI literacy extends beyond these specialisations. We advocate for the integration of AI concepts, ethical considerations, and practical applications across a wide array of disciplines, from the humanities and social sciences to law, business, and the natural sciences. This approach aligns with international efforts, such as UNESCO’s development of AI competency frameworks for both students and teachers, which emphasise foundational AI understanding for all. By embedding AI literacy across the curriculum, UBQ aims to cultivate T-shaped professionals: graduates who possess deep expertise in their chosen field, complemented by a broad understanding of AI’s capabilities, limitations, and societal impact, enabling them to effectively leverage AI in diverse professional contexts. This includes fostering an understanding of how AI is transforming research methodologies, professional practices, and creative expression within each discipline.

The rapid and continuous evolution of AI technologies means that the specific skills and tools that are cutting-edge today may become outdated within a few years. While foundational AI skills are undeniably important, the most crucial “future-ready” competency in the age of AI is adaptability and the capacity for lifelong learning. Focusing educational efforts solely on current AI tools or platforms risks providing graduates with a temporary advantage that quickly erodes. Therefore, HEIs must undertake a fundamental shift in their educational philosophy: from a model primarily focused on “training for specific AI jobs or tools” to one dedicated to “educating for an AI-infused future.” This implies cultivating a deep, principled understanding of AI’s core concepts, fostering robust critical thinking and problem-solving abilities that can be applied in novel and evolving contexts, and instilling a mindset that embraces continuous learning, unlearning, and relearning. UBQ’s educational approach will increasingly emphasise “learning how to learn” in an era defined by rapid technological change, equipping graduates with the intellectual agility and metacognitive skills necessary to adapt and thrive throughout their careers.

4.3 Teacher Training and Continuous Professional Development (CPD)

The successful integration of AI into the educational fabric hinges critically on the preparedness of educators. Faculty members are at the forefront of implementing AI tools and AI-informed pedagogies, and their ability to do so effectively and ethically is paramount. International surveys have revealed that, as of 2022, only a small fraction of countries had established AI frameworks or training programmes specifically for teachers. Furthermore, a significant proportion of educators, even within technology-focused disciplines like Computer Science, report feeling inadequately equipped to teach AI concepts or integrate AI tools into their practice. Recognising this critical need, regulatory frameworks such as the EU AI Act are now mandating training for individuals who work professionally with AI systems, with such requirements taking effect from February 2025. UBQ is committed to providing comprehensive and ongoing Continuous Professional Development (CPD) for its faculty. These programmes will cover not only the technical aspects of using various AI tools but also pedagogical strategies for effectively integrating AI into diverse learning environments, ethical considerations in AIED, methods for fostering critical student engagement with AI, and techniques for assessing learning in AI-augmented contexts.

The role of the teacher is evolving significantly in the AI era, necessitating CPD that goes beyond mere technical proficiency with AI tools. Educators must be equipped to become “learning architects”, capable of strategically designing and orchestrating learning experiences that leverage AI’s strengths while mitigating its weaknesses. They must also serve as ethical guides, facilitating critical discussions about AI’s societal impact, potential biases, and responsible use. Teachers are central to mediating students’ interactions with AI and shaping their understanding of its capabilities and limitations. If educators themselves lack a deep understanding of AI, its pedagogical implications, or its ethical dimensions, they cannot effectively guide their students. Therefore, providing superficial technical training on AI tools is insufficient. UBQ’s CPD initiatives will be comprehensive, aiming to empower faculty with the knowledge and skills to critically evaluate AI tools for pedagogical suitability, design innovative AI-integrated learning activities, foster critical AI literacy in students, and navigate the ethical complexities of AI in education. This redefines the teacher’s role from primarily a content deliverer to that of a facilitator of learning, a critical thinking coach, and an ethical mentor in an AI-enabled classroom.

4.4 Aligning with National and International Competency Frameworks

To ensure the relevance and international recognition of its AI education initiatives, UBQ will actively align its curriculum development and competency frameworks with established and emerging national and international standards. Prominent organisations such as UNESCO and the OECD are at the forefront of developing and promoting AI competency frameworks for both students and educators. These frameworks typically outline essential knowledge, skills, and attitudes related to understanding AI, applying it effectively, and engaging with it ethically and responsibly. Furthermore, influential reports like the EDUCAUSE 2025 Students and Technology Report provide valuable insights into student perspectives on AI, workforce preparation needs, and the importance of AI-related skills for future careers. By benchmarking its efforts against these international standards and research findings, UBQ aims to ensure that its graduates are equipped with AI competencies that are globally relevant, comparable across different educational systems, and aligned with the evolving demands of a globalised, AI-driven job market. This alignment also facilitates international collaboration, student and faculty mobility, and the mutual recognition of qualifications in the rapidly evolving field of AI.  

5. Advancing the Frontier: UBQ’s Strategic Research Directions in AI and Education

5.1 Key Research Thrusts for UBQ in AIED

The University of Bareda Queen is poised to make significant contributions to the advancement of knowledge in Artificial Intelligence in Education (AIED) through a focused and forward-looking research agenda. Building on our existing strengths and cognisant of global research trends and societal needs, UBQ will prioritise several key research thrusts:

  • Explainable AI (XAI) in Learning Analytics: A critical challenge in deploying AIED systems is the “black box” nature of many algorithms. Research in XAI aims to make the decision-making processes of AI models transparent and understandable to educators and learners. UBQ will investigate and develop XAI techniques (such as LIME, SHAP, and Captum, as reviewed in recent literature) specifically tailored for educational contexts. The goal is to enhance trust in AI tools, enable better interpretation of learning analytics, and provide actionable insights for pedagogical interventions.  
  • Cognitive and Affective Impact of AI Tools: Understanding how AI tools influence student learning processes, cognitive load, motivation, engagement, and emotional states is paramount. UBQ research will explore these interactions, potentially leveraging frameworks like Cognitive Load Theory (CLT) and the Cognitive Theory of Multimedia Learning (CTML), and investigating models such as CLAM. This research will inform the design of AIED tools that are not only effective but also supportive of students’ holistic well-being.  
  • Ethical AI Frameworks and Governance for Education: While general AI ethics principles exist, their application in the unique context of education requires specific frameworks and governance models. UBQ will contribute to developing and validating robust ethical guidelines for AIED, addressing issues of fairness, algorithmic bias, data privacy, student consent, and accountability. This includes research into privacy-preserving machine learning techniques suitable for educational data.  
  • Innovative Pedagogical Models Leveraging AI: The availability of new AI capabilities necessitates the design and rigorous testing of innovative teaching and learning strategies. UBQ research will focus on developing pedagogical models that effectively integrate AI to enhance student engagement, foster critical thinking, promote collaborative problem-solving, and cultivate creativity. This includes exploring the role of generative AI in project-based learning and inquiry-driven education.  
  • AI for Accessibility and Inclusive Education: AI holds considerable promise for supporting learners with diverse needs and for bridging educational gaps. UBQ will conduct research into how AI can be used to create more accessible learning materials, provide personalised support for students with disabilities, and develop tools that cater to a wider range of learning preferences and backgrounds, thereby promoting more inclusive educational environments.  
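The XAI thrust above names library-based techniques such as LIME and SHAP. As a self-contained sketch of the underlying idea, the following implements permutation feature importance, a simple model-agnostic explanation method: shuffle one feature at a time and measure how much predictive accuracy degrades. The toy learning-analytics dataset, feature names, and rule-based “model” are invented for illustration and stand in for a trained classifier.

```python
import random

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Model-agnostic feature importance: how much does shuffling one
    feature column degrade the model's predictive accuracy on (X, y)?"""
    rng = random.Random(seed)

    def accuracy(rows):
        return sum(1 for xi, yi in zip(rows, y) if predict(xi) == yi) / len(y)

    base = accuracy(X)
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)  # break the link between feature j and the labels
            X_perm = [row[:j] + [col[i]] + row[j + 1:] for i, row in enumerate(X)]
            drops.append(base - accuracy(X_perm))
        importances.append(sum(drops) / n_repeats)
    return importances

# Invented data: each row is [hours_studied, forum_posts]; label is passed (1/0).
X = [[1, 5], [2, 1], [6, 4], [8, 0], [3, 7], [9, 2], [7, 6], [2, 3]]
y = [0, 0, 1, 1, 0, 1, 1, 0]

def predict(row):
    # Hand-written stand-in for a trained classifier; uses only hours_studied.
    return 1 if row[0] >= 5 else 0

imp = permutation_importance(predict, X, y)
print(imp)  # hours_studied should matter far more than forum_posts
```

Because the stand-in model ignores the second feature entirely, its importance comes out as exactly zero, which is the kind of actionable, interpretable signal the XAI thrust aims to surface for educators.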
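The ethical-frameworks thrust above mentions privacy-preserving machine learning techniques for educational data. One foundational building block is differential privacy; the sketch below adds calibrated Laplace noise to a counting query over invented student grades. It is a minimal illustration of the mechanism, not a production implementation, and the grades and query are assumptions for demonstration.

```python
import math
import random

def laplace_noise(scale, rng):
    # Sample from Laplace(0, scale) via inverse-transform sampling.
    u = rng.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def dp_count(values, predicate, epsilon, rng):
    """Differentially private count: the true count plus Laplace(1/epsilon)
    noise. A counting query has sensitivity 1 (adding or removing one
    student changes the result by at most 1), so scale = 1/epsilon
    yields epsilon-differential privacy for this single query."""
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon, rng)

# Invented grades; the analyst asks how many students scored below 60.
grades = [52, 71, 64, 88, 45, 90, 77, 59, 83, 68]
rng = random.Random(42)
noisy = dp_count(grades, lambda g: g < 60, epsilon=1.0, rng=rng)
print(round(noisy, 2))  # close to the true count of 3, but perturbed
```

Smaller values of epsilon mean stronger privacy and noisier answers; the noise is unbiased, so repeated independent queries average out to the true count, which is why real deployments must also track a cumulative privacy budget.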

A significant portion of current AIED research tends to focus on the immediate efficacy of specific tools or interventions, often measured by short-term outcomes such as improved test scores or faster task completion. While valuable, such studies, many of which are quasi-experimental or of limited duration, may not fully capture the long-term cognitive, affective, and career impacts of sustained AI integration in education. There remains a discernible gap in our understanding of how prolonged interaction with AI-rich learning environments shapes the development of critical thinking, creativity, problem-solving abilities, ethical reasoning, and ultimately, career trajectories and life chances. The rapid evolution of AI and the focus on immediate utility can overshadow the need for patient, longitudinal research into these deeper, more enduring effects. Consequently, a strategic direction for UBQ will be to champion and invest in longitudinal research projects. These studies will aim to track cohorts of students throughout their AI-enriched educational journeys and into their subsequent careers, providing invaluable, nuanced insights into the true transformative potential – and potential pitfalls – of AI in education over extended periods.  

5.2 Fostering Interdisciplinary Collaboration within UBQ

The multifaceted nature of AI and its profound impact on education demand a deeply interdisciplinary approach to research. Effective AIED solutions cannot emerge solely from computer science or education faculties in isolation. They require a synergistic collaboration between technologists who design the algorithms, educators who understand pedagogical principles and classroom realities, ethicists who can navigate the moral complexities, psychologists who can shed light on cognitive and affective processes, domain experts from various disciplines who can ensure content relevance, and legal scholars who can address governance and regulatory issues. The University of Bareda Queen, with its diverse faculties and established strengths in areas such as Computer Science, Medicine (including the AI in Medicine programme), Humanities, Law, and Social Sciences, is exceptionally well-positioned to foster such interdisciplinary collaborations. UBQ will actively promote the creation of interfaculty research groups, joint projects, and shared platforms to encourage the cross-pollination of ideas and the development of holistic, impactful AIED innovations.  

The prevailing research focus in AIED often centres on the application of specific AI tools within existing educational structures and paradigms. While this yields important incremental advancements, a more profound and potentially transformative research agenda would explore how AI is fundamentally reshaping the entire educational ecosystem. This includes investigating AI’s impact on institutional structures, the very nature and epistemology of knowledge in an age of AI-generated content, the evolving roles and identities of academics and students, and the university’s broader societal mission and responsibilities. The potential of AI is not merely to enhance current practices but to be truly transformative. This necessitates a shift in research perspective: from primarily studying AI in education to critically examining education itself as it is being remade by and in response to AI. UBQ’s strategic research initiatives should therefore encompass philosophical, sociological, and critical inquiries into the future of the university, the credibility and authority of AI-generated knowledge, the ethical dimensions of AI’s influence on academic inquiry, and the implications for academic freedom and intellectual discourse. This involves engaging deeply with ongoing debates about the “future of the university” and addressing complex questions of epistemic justice in an AI-pervaded world.  

5.3 Contribution to Addressing Societal Challenges through AIED Research

The research conducted at UBQ in the field of AIED will be driven not only by academic curiosity but also by a strong commitment to addressing pressing societal challenges. AIED research has the potential to contribute significantly to broader societal goals, including fostering lifelong learning opportunities crucial for an adaptable workforce, enhancing professional development pathways, reducing educational inequalities by providing access to quality learning resources, and promoting the ethical and responsible development and deployment of technology globally. These aims are closely aligned with international development agendas, such as the United Nations’ Sustainable Development Goals (SDGs), particularly SDG 4, which focuses on ensuring inclusive and equitable quality education and promoting lifelong learning opportunities for all. By orienting its AIED research towards these larger societal needs, UBQ underscores the public service mission inherent in its role as a leading higher education institution, aiming to leverage technological innovation for the betterment of society.  

While the synergy between human educators and AI is frequently cited as a desirable outcome, the actual dynamics of this collaboration – how to optimise it, what factors contribute to its effectiveness, and how to adequately prepare both humans and AI systems for productive partnership – represent a rich and critical area for dedicated research. Effective collaboration is not an automatic consequence of placing AI tools in the hands of educators; it requires a nuanced understanding of how humans and AI can best complement each other’s distinct strengths and mitigate their respective weaknesses. This involves in-depth study of human factors in AI interaction, the design of intuitive and supportive AI interfaces, the conditions that foster trust (and appropriate scepticism) in AI systems, and the development of pedagogical strategies that explicitly leverage the collaborative potential of human-AI teams in educational settings. The ultimate success of the “AI as a collaborator” model hinges on a deep, empirically grounded understanding of human-computer interaction within the complex, dynamic context of education. UBQ can therefore spearhead research into “Human-AI Teaming” in education. This research would aim to develop and validate models and best practices for how educators and AI systems can work together most effectively to achieve diverse learning outcomes, including exploring how AI itself can be designed to be a more effective, transparent, and pedagogically aware “team player.”  

6. Conclusion: Towards a Symbiotic Future – Human Ingenuity and Artificial Intelligence in Education

6.1 Recapitulation of Core Arguments and UBQ’s Perspective

The ascent of Artificial Intelligence by May 2025 presents both unprecedented opportunities and profound challenges for higher education. This paper has articulated the University of Bareda Queen’s perspective on this transformative landscape, underscoring the pervasive influence of AI and the critical role of HEIs in navigating its complexities. We have highlighted the potential for AI to catalyse pedagogical innovation, from personalised learning pathways to intelligent tutoring systems, while emphasising the indispensable synergy between human educators and AI tools. A central tenet of UBQ’s approach is the unwavering commitment to an ethical compass, ensuring that AI deployment is responsible, equitable, and aligned with human values, guided by evolving regulatory frameworks in Switzerland and Europe. Furthermore, we have underscored the imperative to cultivate future-ready graduates equipped with robust AI literacy and the critical competencies necessary for an AI-augmented world. UBQ’s strategic research directions are aimed at advancing the frontier of AIED, fostering interdisciplinary collaboration, and contributing to the resolution of broader societal challenges. Our vision is steadfastly human-centric, focused on leveraging AI to augment human capabilities and enrich the educational experience.

6.2 Reaffirming Commitment to a Balanced, Ethical, and Human-Centric Approach

The University of Bareda Queen reaffirms its profound commitment to a balanced, ethical, and unequivocally human-centric approach to the integration of Artificial Intelligence in education. We firmly believe that technology, however advanced, must remain a servant to human values and overarching educational goals, never the master. The focus of our endeavours – in teaching, research, and institutional practice – will consistently be on the augmentation, not the replacement, of human intellect, creativity, critical inquiry, and meaningful interpersonal interaction. We are dedicated to fostering an environment where AI empowers educators and learners, enhances understanding, and promotes equitable access, all while safeguarding individual autonomy, privacy, and dignity. This commitment requires continuous vigilance, critical reflection, and an adaptive approach to governance and practice as AI technologies continue to evolve.  

Despite the remarkable capabilities of AI to process information, generate content, and automate tasks, the core functions and enduring value of the university in an AI age not only persist but become even more critical. While AI can handle vast amounts of data and information, it is the university that cultivates wisdom, critical discernment, ethical reasoning, and the holistic development of the individual. The very challenges posed by AI – its potential for misuse, the ethical dilemmas it surfaces, and the need for critical evaluation of its outputs – underscore the heightened importance of human judgment, ethical leadership, and deep, contextual understanding. These are precisely the capacities that HEIs are uniquely positioned to nurture. Thus, the role of the university evolves: from being perceived perhaps primarily as a repository and transmitter of knowledge, to being an even more crucial crucible for developing the sophisticated human capabilities required to navigate an AI-driven world responsibly, thoughtfully, and creatively. UBQ’s vision for the future is one where its relevance is enhanced, not diminished, by AI, as it redoubles its commitment to these foundational humanistic and intellectual pursuits.  

6.3 A Forward-Looking Statement

The journey of integrating Artificial Intelligence into higher education is an ongoing odyssey, not a final destination. The landscape of AI in education is dynamic and rapidly evolving, and the frameworks, understandings, and strategies articulated in May 2025 must be viewed as part of a continuous process of learning, adaptation, and societal dialogue. AI technology will undoubtedly continue its rapid advancement beyond this point, and our ethical, pedagogical, and societal understanding of its implications will co-evolve. Fixed solutions or rigid policies will quickly become anachronistic in such a fluid environment. The dynamic nature of AI necessitates an unwavering institutional commitment to ongoing re-evaluation, iterative refinement of practices, and adaptive governance.  

The University of Bareda Queen embraces this future with a spirit of informed optimism, tempered by a realistic appreciation of the challenges ahead. We envision a symbiotic future where human ingenuity and artificial intelligence collaborate to create educational experiences that are more effective, equitable, engaging, and ultimately, more humanising. UBQ is dedicated to playing a leading role in shaping this future, through pioneering research, innovative pedagogy, ethical leadership, and a steadfast commitment to serving society. We commit not to providing definitive answers for all time, but to remaining at the forefront of the evolving dialogue, fostering continuous inquiry, and adapting our approaches as new knowledge, new technologies, and new societal needs emerge. This agile, learning-oriented posture is, we believe, essential for navigating the algorithmic ascent and charting a truly human-centric trajectory for AI in higher education.
