Self-Assessment

Is your SME ready to govern AI?

A 35-question self-assessment to measure your AI governance maturity and identify your next concrete actions.

≈ 30 minutes
7 dimensions
35 statements

Scoring scale

0 = Not in place
1 = In progress
2 = In place, partial
3 = In place, formalized
1. AI Usage Framework

Have you defined what is allowed, what is supervised, and what is prohibited?

Our organization has a written executive statement on AI usage.

We have an explicit list of AI tools approved for professional use.

A person is formally responsible for AI governance matters.

We know which AI uses are happening in our organization (including unofficial ones).

Our statement is reviewed at least once a year.

2. Personal Information Protection

Are you compliant with privacy laws in your use of AI?

We have designated a privacy officer (DPO/RPRP).

The privacy officer is identified on our website and reachable by affected individuals.

We have an inventory of personal information we hold.

Our employees know they must never paste personal information into unapproved AI tools.

For automated decisions, we inform affected individuals and allow them to make representations.

We conduct a privacy impact assessment (PIA) before any new AI project involving personal information.

3. Data Sovereignty and Residency

Do you maintain control over your data?

For each approved AI tool, we know precisely where data is physically hosted.

For sensitive or personal data, we use tools hosted in Canada.

We have a written contractual guarantee that our data is not used to train models.

We have a matrix mapping data classification to permitted tools.

We have assessed our exposure to third-party jurisdictions (CLOUD Act or equivalent) for our AI tools.

4. Security and Supply Chain

Have you applied your security practices to AI tools?

Strong authentication (MFA) is enabled on all our professional AI tool accounts.

Our employees use only professional accounts (never free personal accounts) for work.

We evaluate the security posture of our AI vendors before signing.

We have a documented incident management procedure for AI tools.

Access to AI tools is promptly revoked when an employee leaves.

5. Risk Management by Use Case

Do you assess the risk of each use case before deploying it?

We have a registry of AI systems used in the organization.

Each AI use case is classified by risk level (low / moderate / high / critical).

Control measures are proportional to the risk level (not uniform).

No client deliverable is sent without human review, regardless of the tool used.

For decisions affecting individuals, we have a review and appeals mechanism.

6. Intellectual Property

Are you protecting your IP in your use of AI?

We have read and understood the terms of use of the AI tools we approve.

Our source code and trade secrets never pass through tools whose terms do not guarantee confidentiality.

Our client contracts clarify ownership of deliverables produced with AI assistance.

We disclose AI usage to clients when that usage is material.

7. Training and Culture

Are your teams equipped to make the right choices daily?

All our employees have received AI training in the past year.

AI training is mandatory before receiving access to approved tools.

Our managers have received additional training on AI governance.

Employees know whom to contact when they have doubts about a use of AI.

Our culture encourages employees to report incidents or problematic uses without fear of reprimand.
