AI safety

Artificial intelligence is advancing at unprecedented speed, with systems that are more powerful, more autonomous, and more deeply integrated into society than ever before. This brings enormous opportunities – but also serious risks. From misinformation and bias to cybersecurity threats and the long-term challenge of aligning advanced AI with human values, ensuring the safe development and use of AI is becoming one of the defining issues of our time. AI safety is about more than preventing harm – it is about building trust, transparency, and resilience into intelligent systems. This track explores how we can design AI that serves humanity safely and responsibly, ensuring progress and human values go hand in hand.

Challenge 01

Aligned intelligence

Challenge in collaboration with TZAFON.

Create an AI-powered solution that stops large-scale misuse of headless browsers without getting in the way of real automation or human users. Today, it’s easy for one person to launch thousands of AI-controlled browser bots. These bot swarms can slip past current “human checks,” overload systems, and cause serious harm. But at the same time, many teams depend on AI agents and scripts to do real, helpful work in a browser.

Assignment

Your challenge is to design a gatekeeper layer that:

  • Detects and neutralizes suspicious patterns (e.g. 1,000 near-identical AI sessions).
  • Allows “benign” AI agents to continue operating when they follow expected, safe behaviour.
  • Gives humans a simple, low-friction way to prove they’re legitimate and move on.
  • Keeps a clear audit trail that feeds a feedback loop, continuously strengthening the “gatekeeper” over time.
  • Continues to track all forms of traffic to detect changes in behaviour.
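The first three requirements above can be sketched as a tiered decision policy. This is a minimal illustration only: it assumes a per-session risk score has already been computed elsewhere, and the threshold values and function name are invented for the example, not part of the challenge spec.

```python
# Illustrative gatekeeper policy: map one session's risk score to an action.
# Thresholds (0.3, 0.7) are assumed example values, not prescribed by the brief.

def gatekeeper_action(risk_score: float) -> str:
    """Return 'allow', 'challenge', or 'block' for a single session."""
    if risk_score < 0.3:
        return "allow"       # benign humans and well-behaved agents pass through
    if risk_score < 0.7:
        return "challenge"   # low-friction proof step, e.g. a one-click confirmation
    return "block"           # likely part of an abusive swarm

assert gatekeeper_action(0.1) == "allow"
assert gatekeeper_action(0.5) == "challenge"
assert gatekeeper_action(0.9) == "block"
```

In a real prototype this decision point would sit in middleware in front of the headless browser runtime or web server, with every decision written to the audit trail that drives the feedback loop.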

Output

  • A prototype (service, middleware, dashboard, or guardrail layer) that sits in front of a headless browser / agent runtime / web server and actively distinguishes and handles abusive bot swarms vs. legitimate AI agents and human users.
  • A short video (1–3 minutes) that explains your solution, the abuse scenario it targets, how it detects or throttles harmful automation while keeping “good” traffic flowing, its potential impact in real systems, and the methodology behind the feedback loop.
  • A GitHub link with a README describing what was actually built during the event (architecture, key signals, and policies), and/or a URL to a live or recorded demo showing the system in action.

Data sources and tools

For this challenge, you don’t need real company data. You can:

  • Create simulated traffic that mixes normal users, helpful AI bots, and abusive bot swarms (e.g. mass logins or form spam).
  • Use a headless browser runtime to let bots and scripts browse a test site while you log what they do (pages visited, clicks, timing).
  • Apply simple rules or risk scores to that behaviour so low-risk sessions pass through, while suspicious swarms are slowed, challenged, or blocked.
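A rule-based risk score over such simulated session logs could look like the sketch below. The field names, thresholds, and weights are all illustrative assumptions about what your logging might capture; a real solution would tune them against your own simulated traffic.

```python
# Hypothetical rule-based risk scoring over logged session behaviour.
# Every field name and weight here is an assumed example, not a given.

def risk_score(session: dict) -> float:
    """Score one session in [0, 1]; higher means more likely abusive."""
    score = 0.0
    if session["requests_per_minute"] > 120:   # inhumanly fast browsing
        score += 0.4
    if session["avg_ms_between_clicks"] < 50:  # near-zero "think time"
        score += 0.3
    if session["identical_path_peers"] > 100:  # many twins on the same click path
        score += 0.3
    return min(score, 1.0)

bot = {"requests_per_minute": 300, "avg_ms_between_clicks": 10,
       "identical_path_peers": 900}
human = {"requests_per_minute": 12, "avg_ms_between_clicks": 1800,
         "identical_path_peers": 0}
assert risk_score(bot) == 1.0
assert risk_score(human) == 0.0
```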

You may also draw inspiration from any public examples or datasets of bots and online abuse.

Approaches

Below are example approaches to spark ideas. Feel free to explore any solution that tackles the challenge effectively.

  • SwarmShield: Build a system that spots groups of near-identical sessions (thousands of AI-driven browsers acting the same way) and automatically slows, challenges, or blocks them.
  • TrustGate: Create a lightweight gate that gives real users and approved bots a simple way to prove they’re legitimate (for example, a one-click confirmation or signed token) while unknown traffic faces stricter limits.
  • BehaviorWatch: Design a monitor that tracks how each session moves through a site (speed, clicks, paths) and flags patterns that don’t look human or safe for extra checks.
  • Benchmarks and evals: Design and build benchmarks and evaluation criteria to compare candidate safety solutions.

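As one way into the SwarmShield idea, near-identical sessions can be grouped by fingerprinting each session's action sequence and flagging fingerprints shared by suspiciously many peers. The group-size threshold and data shapes below are illustrative assumptions.

```python
# Sketch of swarm detection: fingerprint each session's action sequence and
# flag any fingerprint shared by at least `min_group` sessions. The threshold
# of 50 is an assumed example value.
from collections import Counter

def flag_swarms(sessions: dict, min_group: int = 50) -> set:
    """Return ids of sessions whose exact action sequence has >= min_group twins."""
    fingerprints = {sid: tuple(actions) for sid, actions in sessions.items()}
    counts = Counter(fingerprints.values())
    return {sid for sid, fp in fingerprints.items() if counts[fp] >= min_group}

# 60 identical bot sessions plus one distinct human session:
sessions = {f"bot{i}": ["login", "search", "add_to_cart"] for i in range(60)}
sessions["human"] = ["home", "browse", "search"]
flagged = flag_swarms(sessions)
assert "human" not in flagged and len(flagged) == 60
```

Exact-match fingerprints are easy to evade, so a stronger version might cluster on fuzzy features (timing distributions, path similarity) instead; the exact-match version is simply the smallest demonstration of the pattern.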
Challenge 02

AI Safety in real-world physical environments

How do we build Physical AI systems that don’t just operate powerfully in the real world — but do so safely, transparently, and in alignment with human intentions?

In this challenge, you’ll use NVIDIA’s Video Search and Summarization (VSS) blueprint to design a Physical AI safety solution that addresses a real risk in a physical environment.

Choose one focused AI safety or municipal/societal problem. Show how a video-intelligent system can help a space perceive, understand, and reason about what’s happening — and then support safer actions or decisions. That could mean preventing accidents, increasing transparency in public systems, reducing bias in human-in-the-loop workflows, or strengthening oversight.

You don’t need to solve everything at once — start with something focused, meaningful, and grounded in real-world impact. The goal: a future where machines and autonomous systems moving through our world do so safely, predictably, and in harmony with the people they serve.

Assignment

Design and prototype a Vision AI Agent, using NVIDIA’s Video Search and Summarization (VSS) blueprint and a Lovable UI.

Detailed Tasks:

  1. Deploy the Blueprint: Use the provided instructions to deploy the core VSS blueprint.
  2. Customize it for the use case: Adapt the VSS blueprint's processing and output for your chosen Physical AI safety challenge.
  3. Connect frontend UI to Lovable: Integrate your customized solution with a user-friendly Lovable UI for demonstration.

Output

Participants are encouraged to demonstrate the idea in action by providing:

  • Concept Design & Architecture: A diagram or explanation detailing how video data flows, is processed, and leads to a safety-critical decision/insight.
  • A short video: A working prototype (can use simulated or synthetic video data) demonstrating the customized blueprint with a Lovable UI for your chosen focus area.
  • Safety & Ethics Rationale: A brief explanation of how your system addresses bias, improves transparency, and upholds the principles of Physical AI Safety.
  • GitHub link: a README describing what was actually built during the event, and/or a URL to the demo.

Data sources and tools

Specific instructions on how to access technical resources, including credits, compute power, and access to the NVIDIA VSS blueprint, will be provided via email when the Fixathon starts.

Approaches

Below are example approaches to spark ideas. Feel free to explore any solution that tackles the challenge effectively.

  • Forest Watch: AI that turns drone footage into instant summaries of deforestation, storm damage, and wildlife activity.
  • City Incident Tracker: Automatically detects and summarises municipal incidents like accidents, flooding, and vandalism.
  • Workplace Safety Auditor: Identifies near-misses and safety violations in industrial and logistics environments from video.
  • Transit Flow Analyzer: Summarises crowding, accessibility issues, and incidents across public transport stations and vehicles.

Jury

Erik Guander
Applied AI & Partnerships, Tzafon
Anne-Marie Eklund Löwinder
Cybersecurity Expert
David Frykman
General Partner, Norrsken VC
Max Larsson
Ecosystem builder, Founders House
Linda Malm
Regional Manager Enterprise Nordics, NVIDIA
Daniel Albertsson
Chief Digital Innovation Officer, Advania Sweden

Prizes

Prizes will be announced closer to the event.
