A contextual filter is a useful addition to your school filtering system, enabling you to analyse content in real time, for example content that sits behind a login or is generated on the fly, such as by a generative AI system.
You are not required to have a contextual filter, but either way you must understand the technical limitations of having or not having one, and ensure that your policies and configuration take this into account.
Not all filtering systems are able to analyse content in real time. Many systems detect URLs that have not previously been seen and send them to a centralised system to be categorised. Whilst this is perfectly suitable for public static content, it has a number of limitations:
- Content that sits behind a login will not be accessible to centralised spiders
- Content can be personalised to the user or location, so centralised spiders may receive different content from that delivered to the user
- Content that is generated on the fly cannot be filtered, such as the output from generative AI
Contextual filters scan content as it flows across the network, meaning the filter sees the exact content the user will receive and can block or allow it as appropriate.
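The difference between URL-based and contextual filtering can be illustrated with a minimal sketch. The keyword list below is a purely hypothetical placeholder; real contextual filters use far more sophisticated classification than simple term matching:

```python
# Illustration of contextual (real-time) filtering: the decision is made on
# the actual response body the user would receive, rather than on a
# pre-categorised URL database.
# BLOCKED_TERMS is a hypothetical placeholder for a real classifier.

BLOCKED_TERMS = {"example-blocked-term", "another-blocked-term"}

def contextual_check(response_body: str) -> str:
    """Return 'block' if the content contains a blocked term, else 'allow'."""
    body = response_body.lower()
    if any(term in body for term in BLOCKED_TERMS):
        return "block"
    return "allow"

# Because the check runs on the exact content delivered to the user, it also
# covers logged-in pages, personalised pages, and generated (AI) output that
# a centralised crawler would never see.
print(contextual_check("A safe generated answer about photosynthesis"))  # allow
print(contextual_check("Text containing example-blocked-term here"))     # block
```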
What do I need to do if I don't have a contextual filter?
In the DfE document Meeting digital and technology standards in schools and colleges (Technical requirements to meet the standard), the DfE states:
A review of filtering and monitoring should be carried out to identify your current provision, any gaps, and your students’ and staff’s specific needs.
You need to understand: ... technical limitations, for example, whether your solution can filter real time content
By understanding the technical limitations of your filtering system, you can make appropriate policy and technical adjustments to ensure that any risks posed by web content your filtering system may not be able to intercept and scan are mitigated.
This may require activities such as:
- Assessing the risks and benefits of allowing the tools that host user content or generate personalised content, such as social media, or generative AI tools
- Assessing how risks can be mitigated through the use of monitoring systems
- Implementing a policy for usage of tools that you approve
- Ensuring that only approved tools are used by the allowed cohorts, by whitelisting those domains and blocking all others in those categories
- Updating monitoring processes to include controls from specific tools, such as a managed generative AI service, to give you visibility on how those tools are used
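The allow-listing activity above can be sketched as a simple policy check: an approved tool passes for its permitted cohorts, while every other domain in the same category is blocked. The domains, categories, and cohorts below are hypothetical examples, not a definitive implementation:

```python
# Sketch of category-based allowlisting: only approved tools are allowed,
# and only for their permitted cohorts; all other domains in the category
# are blocked. Domains, categories, and cohorts are hypothetical examples.

APPROVED = {
    "generative-ai": {"approved-ai-tool.example": {"staff", "sixth-form"}},
    "social-media": {},  # no approved tools: category blocked for everyone
}

def decide(category: str, domain: str, cohort: str) -> str:
    """Allow only an approved domain in the category, for an allowed cohort."""
    approved_cohorts = APPROVED.get(category, {}).get(domain, set())
    return "allow" if cohort in approved_cohorts else "block"

print(decide("generative-ai", "approved-ai-tool.example", "staff"))  # allow
print(decide("generative-ai", "other-ai.example", "staff"))          # block
print(decide("social-media", "any-site.example", "sixth-form"))      # block
```

In practice this logic lives in your filtering product's policy configuration rather than in code, but the structure, approved domains mapped to allowed cohorts with a default of block, is the same.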
What do I need to do if I do have a contextual filter?
Even with a contextual filter installed, it may have limitations which you must understand to ensure that your policy matches your technical capabilities. A review process similar to the one above will need to be undertaken to ensure that the feature is understood and deployed in a safe manner.