How authenticated photographs travel from camera sensor through manufacturer validation to permanent blockchain storage—surviving all metadata loss.
When you press the shutter, the camera creates cryptographic proof linking the image to physical hardware—generating a complete authentication packet ready for submission.
Every camera sensor has unique manufacturing imperfections at the microscopic level, which the camera characterizes in a Non-Uniformity Correction (NUC) map. This map is as unique as a fingerprint. During manufacturing, the camera stores a cryptographic hash of this NUC map in its Secure Element—a tamper-resistant chip that cannot be cloned or modified. This hash becomes the camera's unforgeable hardware identity.
For smartphones that don't have factory calibration, the approach is slightly different. When you first launch the authentication app, it captures the sensor's Photo Response Non-Uniformity (PRNU)—a stable noise pattern from silicon imperfections. The app uses this PRNU as entropy to generate a cryptographic keypair, stores the private key in the device's secure enclave (Android StrongBox or iOS Secure Enclave), then permanently discards the PRNU pattern. From that moment forward, the keypair serves as the phone's permanent device identity.
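The enrollment step can be sketched with Python's standard library. The function name and seed derivation below are illustrative only: a real app would hand the entropy to StrongBox or the Secure Enclave to generate an asymmetric keypair inside the hardware, rather than deriving a key in application code.

```python
import hashlib

def enroll_device(prnu_pattern: bytes) -> bytes:
    # Condense the sensor's PRNU noise pattern into 32 bytes of entropy.
    seed = hashlib.sha256(prnu_pattern).digest()
    # A real app would feed this entropy to StrongBox / Secure Enclave to
    # generate an asymmetric keypair in hardware; deriving a stand-in key
    # here keeps the sketch runnable with the stdlib alone.
    private_key = hashlib.sha256(b"birthmark-device-key:" + seed).digest()
    # The PRNU pattern and seed are then discarded; only the key survives.
    return private_key
```

The important property is visible even in the sketch: the same sensor always yields the same identity, and the raw PRNU pattern never needs to be stored.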
The moment you press the shutter, two processes begin in parallel. The first handles image hashing. The sensor captures light into raw Bayer data, which gets split via Direct Memory Access into two paths: one path goes to the Secure Element, which computes a SHA-256 hash of the raw data, while the other goes to the Image Signal Processor, which converts the raw data into a JPEG or HEIC file, saves it to storage, and then computes a SHA-256 hash of that processed image. The result is two hashes—one proving what the sensor captured (unmodified), and one for the image you'll actually share.
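The dual-hash step is simple to express. Inside the camera the raw-path digest is computed by the Secure Element, but the computation itself is plain SHA-256 either way:

```python
import hashlib

def capture_hashes(raw_bayer: bytes, processed_image: bytes):
    # One digest over the untouched sensor readout, one over the
    # in-camera JPEG/HEIC that will actually be shared.
    raw_hash = hashlib.sha256(raw_bayer).hexdigest()
    processed_hash = hashlib.sha256(processed_image).hexdigest()
    return raw_hash, processed_hash
```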
The second process generates the camera token. The Secure Element retrieves the stored NUC hash, then randomly selects one of three key tables assigned to this camera during manufacturing. From that table, it randomly selects one of 1,000 keys and uses AES-256-GCM encryption to encrypt the NUC hash. This creates a camera token containing the encrypted NUC hash (ciphertext), an authentication tag for tamper detection, a unique nonce value, and a key reference indicating which table and key index was used. This token is encrypted proof that only the manufacturer can validate—and they can validate it without ever seeing your image.
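A runnable sketch of token creation, using only the standard library. The SHA-256-based keystream and HMAC tag below are stdlib stand-ins for AES-256-GCM (which provides encryption and the authentication tag natively in the Secure Element); the table sizes and field names are illustrative.

```python
import hashlib, hmac, os, secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Stand-in keystream built from SHA-256(key || nonce || counter).
    # A real camera would use AES-256-GCM inside the Secure Element.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def make_camera_token(nuc_hash: bytes, key_tables):
    # Randomly pick one of the camera's key tables, then one of its keys.
    table_idx = secrets.randbelow(len(key_tables))
    key_idx = secrets.randbelow(len(key_tables[table_idx]))
    key = key_tables[table_idx][key_idx]
    nonce = os.urandom(12)
    ciphertext = bytes(a ^ b for a, b in
                       zip(nuc_hash, _keystream(key, nonce, len(nuc_hash))))
    # Authentication tag for tamper detection (GCM provides this natively).
    tag = hmac.new(key, nonce + ciphertext, hashlib.sha256).digest()[:16]
    return {"ciphertext": ciphertext, "tag": tag, "nonce": nonce,
            "key_ref": (table_idx, key_idx)}
```

Note that the token carries only a key reference, never the key itself, so anyone intercepting it sees opaque ciphertext.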
The camera now assembles a complete submission packet of approximately 411 bytes containing three distinct components. The first component is the Birthmark Record, which includes the two image hashes (raw and processed), their modification levels, and optionally some hashed metadata like timestamp, location, or owner information. This is the data that will eventually be stored on the blockchain.
The second component is the Manufacturer Certificate, which identifies which manufacturer should validate this submission (like "CANON_001"), contains the encrypted camera token we just created, and includes the key reference needed for decryption. Critically, this certificate contains no image hashes—the manufacturer will validate the camera without ever seeing what you photographed.
The third component is the Camera Signature—the camera uses its private key stored in the Secure Element to sign the manufacturer certificate. This cryptographic signature binds the certificate to this specific camera, preventing anyone from tampering with the packet or mixing components from different submissions.
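The three components can be assembled as below. This is a sketch under stated assumptions: the field names and the "CANON_001" identifier mirror the text, but HMAC stands in for the Secure Element's asymmetric signature, and covering the certificate plus the record hash in one signature is one plausible way to realize the "no mixing components" guarantee.

```python
import hashlib, hmac, json

def build_packet(raw_hash, processed_hash, camera_token, key_ref, camera_key):
    # Component 1: the Birthmark Record (the only part that reaches the chain).
    birthmark = {"raw_hash": raw_hash, "processed_hash": processed_hash,
                 "modification_level": 1}
    # Component 2: the Manufacturer Certificate; note there are no image
    # hashes here, only the opaque token and its key reference.
    certificate = {"manufacturer_id": "CANON_001",
                   "camera_token": camera_token.hex(),
                   "key_ref": list(key_ref)}
    # Component 3: the Camera Signature. It covers the certificate plus the
    # record hash, so components can't be mixed across submissions. HMAC
    # stands in for the Secure Element's asymmetric signature in this sketch.
    record_hash = hashlib.sha256(
        json.dumps(birthmark, sort_keys=True).encode()).digest()
    cert_bytes = json.dumps(certificate, sort_keys=True).encode()
    signature = hmac.new(camera_key, cert_bytes + record_hash,
                         hashlib.sha256).hexdigest()
    return {"birthmark": birthmark, "certificate": certificate,
            "camera_signature": signature}
```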
Before sending the packet, the camera queries a registry at registry.birthmark.org to get the current list of available submission servers. This list is updated every 24 hours. The camera selects three geographically diverse servers from different regions (Americas, Europe, and Asia-Pacific), deliberately excluding the busiest 25% to balance load. It then randomly picks two servers from this set and submits the authentication packet to both of them simultaneously.
Why two servers? The system uses a dual-approval protocol. For an image to be posted to the blockchain, it must be independently validated by two different submission servers. This prevents fraud—a single compromised server cannot forge authentication records because it can't produce the second independent approval.
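The server-selection steps can be sketched as follows. The registry entry shape (`id`, `region`, `load`) is an assumption, since the text doesn't specify the registry's format:

```python
import random

def pick_submission_servers(registry):
    # registry: list of {"id", "region", "load"} dicts (assumed shape).
    quiet = sorted(registry, key=lambda s: s["load"])
    quiet = quiet[: max(1, (len(quiet) * 3) // 4)]  # drop the busiest 25%
    shortlist = []
    for region in ("americas", "europe", "asia-pacific"):
        regional = [s for s in quiet if s["region"] == region]
        if regional:
            shortlist.append(random.choice(regional))
    # Submit to two of the three shortlisted servers, chosen at random.
    return random.sample(shortlist, k=min(2, len(shortlist)))
```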
The authentication packet travels through multiple independent validators—manufacturer authority, dual submission servers, and blockchain validators—before becoming a permanent record.
When a submission server receives an authentication packet, it follows a strict validation protocol. First, it verifies the camera signature to ensure the packet came from a legitimate device and hasn't been tampered with. It checks that the signature is valid and that the certificate hash matches the accompanying birthmark record. If these checks pass, the server extracts the manufacturer certificate and routes it to the appropriate manufacturer authority for validation.
Here's where the architectural separation becomes critical: the submission server can see the image hashes in the birthmark record, but it cannot decrypt the camera token—that encrypted data is opaque to the server. Meanwhile, when the manufacturer receives the certificate for validation, they only see the encrypted token and key reference. They never receive the image hashes, timestamps, or even information about which server sent the request. Each party sees only fragments of the complete picture.
The server waits for the manufacturer's response. If the response is FAIL (indicating an unknown camera or tampered token), the server rejects the entire submission. If the response is PASS, the server continues to the next critical step: coordinating with a second independent server for dual approval.
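The server-side protocol can be sketched as one function. As in the earlier sketches, HMAC with a shared key stands in for verifying the camera's asymmetric signature, and `ask_manufacturer` is an injected callable because the real transport isn't specified:

```python
import hashlib, hmac, json

def handle_submission(packet, camera_key, ask_manufacturer):
    cert = packet["certificate"]
    # 1. Verify the camera signature binds this certificate to this record.
    record_hash = hashlib.sha256(
        json.dumps(packet["birthmark"], sort_keys=True).encode()).digest()
    cert_bytes = json.dumps(cert, sort_keys=True).encode()
    expected = hmac.new(camera_key, cert_bytes + record_hash,
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, packet["camera_signature"]):
        return "REJECT"  # tampered packet or mixed components
    # 2. Route only the opaque certificate onward; the manufacturer never
    #    sees the image hashes in the birthmark record.
    if ask_manufacturer(cert["manufacturer_id"], cert["camera_token"],
                        cert["key_ref"]) != "PASS":
        return "REJECT"
    return "AWAIT_DUAL_APPROVAL"
```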
The manufacturer's validation process is deliberately stateless and privacy-preserving. When they receive the encrypted token and key reference, they look up the appropriate decryption key from their secure key table database. Using that key, they decrypt the token with AES-256-GCM to reveal the NUC hash. They then query their production database with a single question: "Is this NUC hash from one of our legitimate cameras?" The database returns either yes or no, without revealing which specific camera it is.
The manufacturer also verifies the authentication tag to ensure the token hasn't been tampered with since creation. Based on these checks, they return a simple response: PASS if this is a legitimate camera, FAIL if it's an unknown camera or if tampering is detected. Importantly, the manufacturer doesn't log device identifiers or validation timestamps. Even if someone gained access to their entire database, they would only learn the population of legitimate cameras that exist—not which specific devices authenticated which images, or when any particular camera was active.
Before anything gets posted to the blockchain, the system enforces dual approval. Both servers that received the camera's submission must independently validate with the manufacturer and both must receive PASS responses. Each server then creates a validation proof containing their server identifier, their cryptographic signature over the birthmark record hash, and the manufacturer's PASS signature.
When these validation proofs are submitted to the blockchain validators, the blockchain nodes verify several critical properties: that both proofs cover the same birthmark record hash, that they come from two distinct server identifiers, that each server's signature over the record hash is valid, and that each proof carries a valid manufacturer PASS signature.
Only when all these conditions are met does the blockchain accept the record. This architecture means that a single compromised server cannot forge manufacturer approval (because signatures are bound to specific server identifiers), and a compromised manufacturer cannot post records directly to the blockchain (because they don't have access to the image hashes and can't produce the required dual server approvals).
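The dual-approval check reduces to a pure function over the two proofs. The proof field names and the two verifier callables below are assumptions about the data shapes, not a specification:

```python
def accept_record(proof_a, proof_b, verify_server_sig, verify_mfr_pass):
    # Each proof: {"record_hash", "server_id", "signature",
    # "manufacturer_pass"}; field names assumed for this sketch.
    # 1. Both proofs must cover the same birthmark record.
    if proof_a["record_hash"] != proof_b["record_hash"]:
        return False
    # 2. Dual approval requires two distinct, independent servers.
    if proof_a["server_id"] == proof_b["server_id"]:
        return False
    # 3. Each server signature and manufacturer PASS must verify.
    for proof in (proof_a, proof_b):
        if not verify_server_sig(proof["server_id"], proof["signature"],
                                 proof["record_hash"]):
            return False
        if not verify_mfr_pass(proof["manufacturer_pass"]):
            return False
    return True
```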
After successful dual-approval validation, only the essential birthmark record gets stored on the blockchain—approximately 153 bytes. The manufacturer certificate and validation proofs (another ~350 bytes) served their purpose in the validation process and are discarded. The blockchain record contains just what's needed for public verification: the image hash, modification level, parent hash if it's an edited image, any metadata hashes if provided, the identifiers of both posting servers, and a processing timestamp rounded to 10-minute intervals for privacy.
This storage efficiency is crucial for sustainability. At one million authentications per day, the blockchain grows by only about 56 gigabytes per year. Validator node costs run roughly $200-350 annually, making operation feasible for nonprofit journalism organizations. The system uses GRANDPA consensus (GHOST-based Recursive Ancestor Deriving Prefix Agreement), which requires agreement from more than two-thirds (67%) of validators for block finality. The network needs a minimum of 4 validators to tolerate 1 Byzantine fault, though 10 or more validators are recommended for stronger fault tolerance (tolerating up to 3 Byzantine failures).
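Both figures above check out with a little arithmetic: storage growth follows directly from the record size, and validator counts follow from the standard Byzantine fault-tolerance bound n >= 3f + 1.

```python
# Chain growth at one million 153-byte birthmark records per day.
records_per_day = 1_000_000
record_bytes = 153
yearly_gb = records_per_day * record_bytes * 365 / 1e9
print(f"{yearly_gb:.1f} GB/year")  # prints 55.8 GB/year

# Byzantine fault tolerance: n validators tolerate f faults when n >= 3f + 1.
def max_byzantine_faults(n_validators: int) -> int:
    return (n_validators - 1) // 3
```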
When photographers edit images in software like Lightroom or Photoshop, the editing application can create authenticated records for the edited versions by including deviation validation data. The software extracts the list of operations performed (like "Exposure +1.5 stops, Crop to 16:9"), then takes 100 random 64x64 pixel patches from the original image. It applies the reported operations to these patches to simulate what the result should look like, then compares the actual edited image to this simulation.
The deviation score represents the percentage of pixels where the RGB distance exceeds a threshold. Standard photojournalist edits like exposure adjustment, white balance correction, denoising, and cropping typically produce deviation scores of 8-12%, which manufacturers classify as "Validated" content. Content modification operations like clone stamp, object removal, or generative fill typically produce deviation scores exceeding 40%, classified as "Content Modified."
The key distinction is that deviation measures the difference from the expected result given the reported operations, not the difference from the original. Standard editing workflows produce low deviation scores because the validation service can accurately replicate those adjustments. Content alteration tools cannot simulate expected results because the modifications aren't predictable from operation parameters alone—removing an object requires knowing what was removed and how the gap was filled, information that's not available from the operation parameters.
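The scoring step described above can be sketched in a few lines. The per-pixel Euclidean RGB distance and the threshold value of 30 are assumptions for illustration; only "percentage of pixels where the RGB distance exceeds a threshold" comes from the text.

```python
def deviation_score(expected_pixels, actual_pixels, threshold=30.0):
    # expected_pixels: (r, g, b) samples from patches re-rendered by
    # applying the reported operations; actual_pixels: the same sample
    # positions in the actual edited image.
    deviant = 0
    for (r1, g1, b1), (r2, g2, b2) in zip(expected_pixels, actual_pixels):
        distance = ((r1 - r2) ** 2 + (g1 - g2) ** 2 + (b1 - b2) ** 2) ** 0.5
        if distance > threshold:
            deviant += 1
    return 100.0 * deviant / len(expected_pixels)
```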
Once an image hash is recorded on the blockchain, anyone can verify authenticity—no account, no subscription, no gatekeepers required.
Public verification is straightforward and privacy-preserving. When you have an image you want to verify—whether it's a JPEG, PNG, HEIC, or any other format—you can either upload it to a web verifier or hash it locally on your own device. The verifier calculates the SHA-256 hash of the image bytes, then queries the blockchain with a simple request: "Do you have a record for this hash?"
If the blockchain finds a matching record, it returns the stored information: the modification level (indicating whether it's raw sensor data, validated processed content, or modified content), the processing timestamp, the parent hash if it's an edited image, and the complete provenance chain if multiple edits occurred. If the blockchain doesn't find a matching record, it simply returns "No record found"—meaning either the image was never authenticated, or authentication hasn't propagated to the blockchain yet.
This query process is completely stateless from a privacy perspective. The blockchain query reveals nothing about you as the verifier—it doesn't know who's asking, where you're located, or why you're checking this particular image. And importantly, the blockchain record itself contains no information about the photographer's identity, the camera serial number, the capture location, or the manufacturer. You're simply asking "Is this image hash in the registry?" and getting back technical authentication metadata.
Level 0: "Validated Raw" indicates raw sensor data from a verified camera with no processing beyond sensor readout. This is maximum authenticity for forensic analysis and evidentiary purposes. You'll typically only see this level when working with raw image files in professional contexts.
Level 1: "Validated" means this is a processed image (JPEG or HEIC) from a verified camera with standard in-camera processing applied—white balance, sharpening, compression. This is the standard authentication level for photojournalism and social sharing, representing what you'd expect from a camera's normal operation.
Level 2: "Modified" indicates the image was edited in software like Lightroom or Photoshop, but it links back to a validated original through its parent hash. This classification maintains chain of custody while being transparent about post-capture editing. It's appropriate for edited photos that preserve their connection to authenticated hardware capture.
"Not Found" means no blockchain record exists for this image hash. This could mean the image was never authenticated, authentication failed for some reason, or the blockchain simply doesn't have this record yet. Unknown provenance should be treated with appropriate skepticism.
For edited images with modification level 2, verifiers can trace the complete edit history back to the original capture. Each edited version includes a parent hash pointing to the previous version. When you verify an edited crop, you might see it has a parent hash. Querying that parent hash might reveal it was also edited (level 2) with its own parent hash. Querying that hash might finally reveal the original raw capture (level 0) with no parent. This complete chain—raw to first edit to crop—provides transparency about the entire editing workflow while maintaining cryptographic proof that all edits ultimately derive from authenticated hardware capture, not AI generation.
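Walking the chain is a simple loop over parent hashes. The record field names and the `lookup` callable are assumptions about the query API's shape:

```python
def trace_provenance(leaf_hash, lookup):
    # lookup: callable mapping an image hash to its blockchain record,
    # or None if no record exists.
    chain = []
    current = leaf_hash
    while current is not None:
        record = lookup(current)
        if record is None:
            break  # never authenticated, or not yet propagated
        chain.append(record)
        current = record.get("parent_hash")  # None once we reach the original
    return chain
```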
The fundamental advantage of the Birthmark approach is that authentication records are stored on an independent blockchain rather than embedded in image files. When you share an image on social media, the platform compresses it, converts formats, and strips all metadata. With C2PA and similar embedded solutions, approximately 95% of credentials are lost in this process. But with Birthmark, the image file can be compressed, converted, cropped, re-saved, screenshot—and the hash remains verifiable by querying the blockchain.
The limitation is that if pixel values actually change—from Instagram filters, significant compression artifacts, or editing—then the hash changes and you need to query the parent hash instead. This is why the parent hash linking is critical for edited images. The system can't protect against pixel changes, but it can maintain the chain of custody through those changes.
Multiple paths exist for verification to ensure accessibility. You can upload images to the web verifier at birthmarkstandard.org/verify. A browser extension lets you right-click any image to verify it in place. Privacy-conscious users can use the command-line tool to hash locally and query the blockchain directly. There's a GIMP plugin for integrated verification within photo editing workflows. Developers and automated systems can query the blockchain directly through the API. All these tools are open-source, meaning anyone can audit the verification logic or run their own verification infrastructure.