The purpose of this policy is to define guidelines for the appropriate use of Artificial Intelligence (AI) tools at FxScouts. It aims to encourage the efficient and secure use of AI, including generative AI programs such as ChatGPT, while mitigating the associated risks.
This policy applies to everyone at FxScouts and governs our use of all AI tools provided by the company, including but not limited to generative AI programs such as ChatGPT.
Anyone at FxScouts must complete the required training before being permitted to use AI tools. This requirement ensures that we are fully aware of the tools' capabilities and limitations and use them effectively and responsibly.
Unauthorized use includes, but is not limited to, usage by non-authorized personnel, utilization beyond one's scope of work, or employing the tools in manners inconsistent with this policy or other company guidelines. Any suspected unauthorized use should be reported to the relevant supervisory personnel or IT security team promptly. Unauthorized usage may lead to disciplinary action, up to and including termination of employment.
Preservation of company data security, intellectual property, and confidentiality is paramount in all activities, including the use of AI tools. As these tools learn and generate content based on the input data, it is crucial that users avoid inputting or sharing sensitive information, such as customer data, confidential contracts, details about partnerships, projects, work statements, or any other proprietary information.
Additionally, we must respect legal and ethical boundaries regarding data privacy. If we are unsure whether specific information is appropriate to use with the AI tool, we should consult our legal department. Violations of data security and confidentiality guidelines may lead to disciplinary action.
AI tools are not stand-alone solutions but are part of a broader set of resources to assist us in our roles. They should be used to supplement, not replace, traditional methods of problem-solving and decision-making.
The output of AI tools should always be supplemented with business logic. For instance, if an AI tool generates a suggestion or plan, users should critically evaluate the suggestion using their understanding of the company's business model, strategy, and market conditions.
Furthermore, collaboration with colleagues is encouraged to gain different perspectives, double-check the AI tool's outputs, and reduce the risk of errors.
Additionally, we should appropriately validate the output of AI tools. This may involve cross-verifying the information with other reliable sources, conducting rigorous testing if feasible, or consulting experts when necessary.
Using AI tools as a supplement ensures that we retain human judgement and oversight in our processes, thereby maximizing the value of these tools while minimizing the associated risks.
When using AI tools, we must always be aware of the inherent risks these technologies pose. These risks may include potential inaccuracies or misinterpretations in AI-generated content due to lack of context, legal ambiguities regarding content ownership, and possible breaches of data privacy. Therefore, we need to maintain a critical attitude towards AI outputs at all times.
To effectively manage these risks, it is the responsibility of management to integrate AI-specific risk assessments into our broader risk management procedures. This involves continually evaluating and updating our protocols to identify, assess, and mitigate potential risks, considering changes in AI technology, its application, and the external risk environment. Additionally, periodic training and awareness sessions for us are necessary to keep us informed about these risks and the steps needed to address them.
We should exercise caution when using third-party AI platforms due to potential security vulnerabilities and data breaches. Before using any third-party AI tool, we must verify the platform’s security by checking for appropriate security certifications, reviewing the vendor's data handling and privacy policies, and consulting with our IT or cybersecurity team if needed.
Additionally, any data shared with third-party platforms must comply with the guidelines outlined in section 2, Data Security and Confidentiality. If we are unsure about using a third-party platform, we should seek guidance from our supervisors or the IT security team.
Whenever the use of AI is proposed, content creators, content curators or product teams must first consider whether both the deployment of AI in principle and the specific product or tool are appropriate for the task it is being required to do.
They should also be aware that AI may be integrated into tools provided by external suppliers or tools that are openly available on the internet.
Any use of AI by FxScouts in the creation, presentation or distribution of content must be consistent with the Editorial Guidelines, including the principles of impartiality, accuracy, fairness and privacy.
Any use of AI by FxScouts in the creation, presentation or distribution of content must include active human editorial oversight and approval, appropriate to the nature of its use and consistent with the Editorial Guidelines.
For example, oversight of a recommendation engine may be at a high level to ensure that its output is consistent with the Editorial Guidelines. But where an AI is used in data analysis for a journalistic project, human oversight should engage with the detailed output.
In all cases, there must be a senior editorial figure who is responsible and accountable for overseeing the deployment and continuing use of any AI tool. Editorial line managers must also make sure they are aware of, and effectively managing, any use of AI by their teams.
Any use of AI by FxScouts in the creation, presentation or distribution of content must be transparent and clear to the audience. The audience should be informed in a manner appropriate to the context, and it may be helpful to explain not just that AI has been used but how and why it has been used.
The outcomes produced by AI are determined by both the algorithm behind it and the data that it has been trained on. Both the algorithm and the training data may introduce biases or inaccuracies into the outcomes of the AI.
Any proposed use of any AI must consider whether any inherent biases affect its deployment by FxScouts and therefore whether it is an appropriate tool.
Generative AI operates by predicting likely responses to queries or instructions, based on the nature of its algorithm and training data, rather than providing content or answers that are necessarily factually accurate.
Any proposed use of generative AI must consider the potential that content presented as accurate may in reality be a creation of the algorithm: a 'hallucination' or fabrication with no basis in fact.
Similarly, generative AI may simply adapt content from a web search or from a database of trusted content and present it as original.
Any proposed use of AI must consider the potential that content presented as original may in reality be plagiarised or mimicked.
In its use of AI, FxScouts also has a responsibility not only to consider the rights of creators and artists but also to avoid jeopardising the role they play in the wider creative community. Any use of AI must respect the rights of talent and contributors, while also allowing for the creative use of new forms of expression.
Compliance with this policy will be monitored regularly by FxScouts. Any policy breaches identified will be addressed and remedied promptly.
This policy will be reviewed and updated periodically to accommodate changes in technology, business needs, or legal requirements.