

Artificial Intelligence and Generative AI Models Policy Statement


Last updated: 14 October 2024


1.      Introduction

Although Artificial Intelligence (AI) is not a new concept, the use of AI and generative AI models (such as Large Language Models) has received significant attention recently.

AI, ChatGPT, Copilot, and many other generative AI features embedded in applications are regarded as valuable for providing exciting new prospects of enhanced speed, efficiency, automation, and productivity in a highly challenging and complex world and work environment.

JVR recognises the importance of AI and generative AI in enhancing productivity, improving efficiency and decision-making, and delivering innovative services.

We also acknowledge the potential risks associated with their use, including biased algorithms, loss of privacy, and the ethical concerns surrounding their implementation.

 

2.      Definitions

For the purposes of this policy statement:

‘Artificial Intelligence (AI)’: refers to any computer system that can perform tasks that would typically require human intelligence.

‘ChatGPT’: is a generative artificial intelligence chatbot developed by OpenAI to generate human-like responses to text-based queries. Launched in 2022 and based on the GPT-3.5 large language model (LLM), it was later updated to use the GPT-4 architecture.

‘Large language model (LLM)’: refers to a computational model capable of language generation or other natural language processing tasks. As language models, LLMs acquire these abilities by learning statistical relationships from vast amounts of text during a self-supervised and semi-supervised training process.

‘Copilot’: is the generative artificial intelligence chatbot developed by Microsoft. Based on the GPT-4 series of large language models, it was launched in 2023 as Microsoft's primary replacement for the discontinued Cortana.

‘Generative Artificial Intelligence (generative AI)’: is artificial intelligence capable of generating text, images, videos, or other data using generative models, often in response to prompts. Generative AI models learn the patterns and structure of their input training data and then generate new data that has similar characteristics.

‘Workplace’: refers to any physical or virtual location where employees carry out their work.

 

3.    The Objective of this JVR Policy Statement

 The objective of this policy statement is to formulate the JVR Group of companies’ position on the use of AI and large language models in the products and services we provide to clients.

 

4.    Measures and Protocols

 JVR subscribes to the following measures and protocols to ensure the responsible use of AI and generative AI models in the workplace:

a.      Ethical considerations: We will always ensure that our use of AI and generative AI models aligns with our organisational values and the principles of fairness, transparency, and accountability. We will take appropriate measures to prevent the perpetuation of harmful biases or discrimination, having regard to ethical, social, and legal implications.

b.      Transparency: We will be transparent about how we use AI and generative AI models in the workplace. Employees will be informed about the role and purpose of AI and generative AI models, how their data is used and protected, and how they can address any concerns.

c.      Data privacy: We will protect the privacy and confidentiality of employee data by ensuring that the data collected is only used for its intended purpose and is safeguarded in line with relevant data protection laws.

d.      Responsible use: We will use AI and generative AI models responsibly and ensure that their use does not negatively impact employees' welfare, dignity, or job security.

e.      Employee training: We will provide training to our employees on how to use AI and generative AI models in the workplace, their benefits and limitations, and how to recognise and report any concerns or ethical issues.

f.       Accountability: We will ensure that the implementation of AI and generative AI models in the workplace is overseen by a designated team responsible for monitoring and evaluating the ethical and practical implications of their use.

g.      Review and update: We will regularly review and update our policies and practices surrounding the use of AI and generative AI models in the workplace to ensure they align with best practices and evolving ethical considerations. AI and generative AI model applications must be designed and tested for fairness, accuracy, and reliability. Any potential negative impacts of AI and generative AI models on employees or clients must be identified and addressed.

h.      Training: Employees who are responsible for developing, implementing, or using AI and generative AI model applications must receive training on the responsible use of AI and generative AI models.

 

5.    Policy Statement

No modern business can afford to ignore the benefits provided by technology. It is, however, also true that embarking on this journey requires an informed and clear understanding of the benefits and challenges of AI and large language models in the products and services provided by JVR to clients. To apply the above-mentioned measures and protocols, the following areas of concern are addressed:

5.1    The Use of AI in Rendering Psychological, HR, and Talent Products and Services

It is important to note that AI can enhance the client experience on technology systems, but it cannot replace the scientific research, the quantitative analysis, and the accuracy of scoring, norming, and validation of assessments.

Using AI and large language models to interpret assessment results is possible after extensive periods of “teaching and checking” the system output. There is, however, a significant risk of receiving generalised results, of the system getting facts wrong, of incorporating bias, and even of “hallucinating”.

Understanding these risks, JVR deals with AI and large language models in the JVR electronic platforms and the work we do, as follows:

a.      All AI and generative AI model applications considered by JVR undergo a thorough risk assessment before implementation, to identify ethical, legal, research, and human implications that could impact the scientific validity and accuracy of our products and services.

b.      Qualitative interpretation of assessment results compiled by AI and large language models must be supported by evidence of the scientific accuracy and soundness of the output and advice before implementation.

5.2    The Use of AI and its Impact on Governance, Risk Management, and Compliance

The expectations of the GDPR and POPIA regarding the safety of personal information align with the ethical principles of psychological best practice as formulated in the Health Professions Act. Given the nature of the work done by JVR, we are committed to this legislation and give ongoing attention to ensuring a safe, secure, and compliant environment.

It is important to note that the security of AI and large language models available on the internet cannot be guaranteed. Where JVR invests in AI and large language models, we take particular care to:

a.      Store and process all our information in a safe and secure hosting environment.

b.      Continuously review and update our AI and generative AI policies and protocols to ensure they remain relevant and effective.

c.      Keep our employees informed and educated regarding AI and generative AI model protocols.

5.3    The Use of AI in Day-to-Day Productive Work

Employees are encouraged to make use of AI in their day-to-day activities but remain personally accountable for all their work-related outputs, regardless of the methods used to produce or deliver them.

Special care must be taken when working with confidential company data in AI applications. Data breaches are considered a serious offence, and JVR actively protects its resources, information, and employees against them.

Where employees create content of any kind, references are to be used as per APA guidelines where appropriate. LLMs and other applications are excellent research tools, but their use does not excuse employees from appropriate scientific referencing where it is required.

 

6.   In Conclusion

JVR is excited about the opportunities provided by AI and large language models to deliver products and services to our clients that are fast, efficient, user-friendly, and comprehensive. Speed cannot, however, replace the critical importance of the accuracy of the data, the scientific evidence, and the usefulness and truth of information to the client. Nor do speed and efficiency remove the risks of cybercrime, loss of personal information, and data breaches. For this reason, this policy statement seeks to provide clarity about the responsible ways in which JVR will deal with the opportunities and risks presented by AI and large language models.
