How Iridology AI Works: Our Methodology
A technical walkthrough of the computer vision pipeline behind Iridology AI — from image capture to pattern classification. We explain what the system does, how it does it, and where the known limitations lie.
The Analysis Pipeline
Five stages from image upload to wellness report
When you upload an iris photo, it passes through a sequence of five stages. Each stage has a specific job, and the output of one stage feeds into the next. The entire process runs in a few seconds on our server.
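In code terms, the five stages behave like a simple function chain: each stage's output is the next stage's input, and a quality failure short-circuits with a reason. The sketch below uses stub bodies and made-up function names purely to show the control flow; it is not our actual module layout.

```python
# Skeletal sketch of the five-stage flow; every function body is a stub
# so the end-to-end control flow is visible. Names are illustrative.
def assess_quality(image):
    return (len(image) > 0, "ok" if image else "empty image")

def segment_iris(image):
    return "mask"

def map_zones(image, mask):
    return {"digestive": image, "nervous": image}

def extract_features(zone):
    return [0.0]

def classify_and_report(features):
    return {name: "baseline" for name in features}

def run_pipeline(image):
    """Stage outputs feed forward; a quality failure exits early with a reason."""
    ok, reason = assess_quality(image)            # stage 1: quality gate
    if not ok:
        return {"rejected": True, "reason": reason}
    mask = segment_iris(image)                    # stage 2: segmentation
    zones = map_zones(image, mask)                # stage 3: zone mapping
    feats = {n: extract_features(z) for n, z in zones.items()}  # stage 4
    return classify_and_report(feats)             # stage 5
```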
1. Image Capture and Quality Assessment
The first thing the system does is check whether the image is usable. A quality assessment module evaluates several properties: resolution (minimum pixel count for the iris region), sharpness (Laplacian variance), lighting uniformity, and occlusion — how much of the iris is visible versus blocked by eyelids, eyelashes, or reflections. If any metric falls below the threshold, the image is rejected with a specific reason so you can retake it.
This step matters because downstream processing assumes a clean iris image. Feeding a low-quality image into segmentation produces poor results, and those errors compound through the rest of the pipeline. The quality gate is conservative by design — it occasionally rejects images that a human iridologist could work with, but it reduces the chance of producing a misleading report.
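The sharpness metric mentioned above, Laplacian variance, is straightforward to compute. The sketch below shows a minimal version of the quality gate: the kernel is the standard discrete Laplacian, but the thresholds and function names are illustrative assumptions, not our production settings.

```python
import numpy as np

# Illustrative quality-gate sketch; thresholds are assumptions, not our
# production values. Input is a 2-D grayscale array in [0, 255].
LAPLACIAN_KERNEL = np.array([[0,  1, 0],
                             [1, -4, 1],
                             [0,  1, 0]], dtype=float)

def laplacian_variance(gray: np.ndarray) -> float:
    """Variance of the Laplacian response: low values indicate blur."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for dy in range(3):
        for dx in range(3):
            out += LAPLACIAN_KERNEL[dy, dx] * gray[dy:dy + h - 2, dx:dx + w - 2]
    return float(out.var())

def passes_quality_gate(gray: np.ndarray,
                        min_side: int = 256,
                        min_sharpness: float = 50.0) -> tuple[bool, str]:
    """Return (ok, reason); the reason names the failing metric for the retake prompt."""
    if min(gray.shape) < min_side:
        return False, "resolution too low"
    if laplacian_variance(gray) < min_sharpness:
        return False, "image too blurry"
    return True, "ok"
```

A rejected image carries a specific reason string, which is what drives the "retake with a specific reason" behavior described above.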
2. Iris Segmentation
Segmentation is the process of locating the iris within the eye image and separating it from the surrounding sclera, pupil, and eyelids. Our approach combines classical computer vision with learned components. The boundary between the iris and sclera (outer boundary) and the boundary between the iris and pupil (inner boundary) are detected using a method derived from the Daugman integro-differential operator — a mathematical formulation that searches for circular edges by computing gradient maxima across radial integration paths.
In practice, we run the classical operator as an initialization step and refine the boundary with a convolutional neural network trained to predict per-pixel iris masks. This hybrid approach handles cases where the classical method struggles — off-angle gazes, non-circular iris boundaries, or partially visible irises. The segmentation mask output from this stage becomes the input to zone mapping.
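To make the classical initialization step concrete, here is a toy version of the circular-edge search that the integro-differential operator performs: for a fixed candidate center, it finds the radius at which the mean intensity along the circle changes fastest. Sampling density, search ranges, and names are illustrative; the real operator also searches over candidate centers and applies radial smoothing.

```python
import numpy as np

# Toy circular-edge search for a single fixed center. The production
# operator additionally searches over centers and smooths the radial profile.
def circle_mean(img: np.ndarray, cx: float, cy: float, r: float,
                n_samples: int = 64) -> float:
    """Mean intensity sampled along a circle (nearest-neighbor sampling)."""
    theta = np.linspace(0, 2 * np.pi, n_samples, endpoint=False)
    xs = np.clip(np.round(cx + r * np.cos(theta)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip(np.round(cy + r * np.sin(theta)).astype(int), 0, img.shape[0] - 1)
    return float(img[ys, xs].mean())

def best_radius(img: np.ndarray, cx: float, cy: float,
                r_min: int, r_max: int) -> int:
    """Radius with the largest radial derivative of the circle-mean intensity."""
    means = np.array([circle_mean(img, cx, cy, r) for r in range(r_min, r_max + 1)])
    grad = np.abs(np.diff(means))        # discrete d/dr of the contour integral
    return r_min + int(np.argmax(grad))  # the boundary sits at the sharpest change
```

On a dark disk against a bright background, the gradient maximum lands at the disk edge, which is exactly the behavior the operator exploits for pupil and limbus boundaries.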
3. Zone Mapping
Once the iris region is isolated, it needs to be divided into zones that correspond to specific body systems. Iridology charts assign different regions of the iris to different organs and systems — the digestive system at certain clock positions, the nervous system at others, and so on. We implement this by first normalizing the iris into a rectangular image using a rubber-sheet model (similar to the approach in Daugman's biometric iris recognition work), then overlaying a polar grid where each sector maps to a body system.
The zone chart we use is based on widely referenced iridology mapping conventions. We acknowledge that these mappings are not universally standardized — different iridology schools place organs in slightly different positions. Our implementation follows one well-documented convention consistently, which means the system is internally consistent but does not represent a single universal standard.
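A minimal version of the rubber-sheet unwrap plus a sector lookup might look like the following. It assumes concentric circular pupil and iris boundaries, which the production segmentation does not require, and the output dimensions and the clock-to-column mapping are simplifying assumptions for illustration.

```python
import numpy as np

# Minimal rubber-sheet sketch: assumes concentric circular boundaries.
def rubber_sheet(img, cx, cy, r_pupil, r_iris, out_h=32, out_w=256):
    """Map the iris annulus to a rectangle: rows = radius, cols = angle."""
    radii = np.linspace(r_pupil, r_iris, out_h)
    thetas = np.linspace(0, 2 * np.pi, out_w, endpoint=False)
    rr, tt = np.meshgrid(radii, thetas, indexing="ij")
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs]  # shape (out_h, out_w)

def zone_sector(unwrapped, clock_start, clock_end):
    """Slice the angular band for one body-system zone.

    Clock positions run 0-12 and are assumed to start at column 0 of the
    unwrapped image; a real chart needs an orientation offset.
    """
    w = unwrapped.shape[1]
    a, b = int(clock_start / 12 * w), int(clock_end / 12 * w)
    return unwrapped[:, a:b]
```

Once the iris is in this rectangular form, each zone is just a column slice, which is what makes the per-system aggregation in later stages simple.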
4. Feature Extraction
Feature extraction is where the raw pixel data within each zone is converted into numerical descriptors that the classification stage can use. We employ multiple feature extraction methods in parallel: Gabor filter banks for texture patterns, Local Binary Patterns (LBP) for rotation-invariant texture descriptors, and HSV color analysis for pigmentation patterns relevant to iridology observations.
The output of this stage is a feature vector per zone: a compact numerical summary of the texture, structure, and color characteristics within that region of the iris.
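As one concrete example of these descriptors, here is a minimal 8-neighbor LBP with a histogram summary, which is the kind of compact per-zone vector this stage produces. This is the basic variant rather than the rotation-invariant mapping mentioned above; the rotation-invariant form is a standard refinement built on these same codes.

```python
import numpy as np

# Minimal LBP sketch: basic 8-neighbor codes plus a normalized histogram.
def lbp_codes(gray: np.ndarray) -> np.ndarray:
    """8-bit LBP code per interior pixel: each neighbor >= center sets a bit."""
    c = gray[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = gray[1 + dy:gray.shape[0] - 1 + dy, 1 + dx:gray.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.uint8) << bit
    return codes

def lbp_histogram(gray: np.ndarray) -> np.ndarray:
    """Normalized 256-bin code histogram: a per-zone texture feature vector."""
    hist = np.bincount(lbp_codes(gray).ravel(), minlength=256)
    return hist / hist.sum()
```

A perfectly uniform patch produces a single LBP code, so its histogram concentrates in one bin; textured iris fibers spread mass across many bins, and that distribution is what the classifier consumes.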
5. Pattern Classification and Report Generation
The final stage takes the per-zone feature vectors and produces the wellness report you receive. A trained classifier scores each body system zone on a scale that reflects the degree of deviation from a baseline texture profile. These scores are then mapped to plain-language descriptions — explaining what the system observed in each zone without making diagnostic claims.
The classification model was trained on a curated set of iris images annotated by practitioners according to iridology chart conventions. We do not claim that the classifier has been validated against clinical outcomes. It learns to reproduce the patterns that iridologists associate with wellness observations, which is a different thing from confirming that those associations reflect underlying physiology. This distinction is important and we return to it in the research section below.
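The mapping from zone scores to plain-language text can be pictured as a simple banding step. The band boundaries and wording below are invented for this example; the production report uses its own thresholds and phrasing.

```python
# Illustrative report-generation step: a deviation score in [0, 1] is
# mapped to a non-diagnostic description. Bands and wording are made up.
BANDS = [
    (0.33, "texture close to the baseline profile"),
    (0.66, "mild deviation from the baseline texture profile"),
    (1.01, "notable deviation from the baseline texture profile"),
]

def describe_zone(zone_name: str, score: float) -> str:
    """Render one report line from a zone's deviation score."""
    for upper, phrase in BANDS:
        if score < upper:
            return f"{zone_name}: {phrase} (score {score:.2f})"
    raise ValueError("score out of range")
```

The point of the banding is that the user-facing text only ever states what was observed relative to a baseline, never a diagnosis.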
Research Basis and Limitations
What the evidence supports and where it falls short
What the Research Shows
The computer vision components of our pipeline — segmentation, feature extraction, texture analysis — are well-established in the academic literature. Iris recognition for biometric identification has been studied extensively since the 1990s, and techniques like the Daugman integro-differential operator, rubber-sheet normalization, and Gabor-based feature extraction have decades of published validation behind them. Our use of these techniques for iris pattern description (rather than identity verification) is a straightforward adaptation of proven methods to a different problem domain.
What the Research Also Shows
The scientific evidence for iridology itself — the idea that iris patterns systematically reflect the state of internal organs — is mixed. Some published studies report correlations between certain iris signs and specific health conditions, while others find no statistically significant relationship. A 2005 systematic review published in the Journal of General Internal Medicine examined three controlled trials and concluded that iridology was not a useful diagnostic tool. More recent work, particularly from researchers in Eastern Europe and East Asia, has produced more favorable findings, but the overall body of evidence remains contested.
We are transparent about this: the research does not uniformly support iridology's core claims. Our system operates within the framework of iridology as a complementary observation practice, not as a validated medical diagnostic. The patterns it detects in the iris are real — the question of what those patterns mean for your health remains an area of active debate in the scientific community.
Our Approach to Validation
We do not claim FDA clearance, CE marking, or clinical validation. We continuously evaluate our system's performance against practitioner-annotated datasets and track inter-rater agreement between the model and human iridologists. Our internal benchmarks measure consistency — whether the system produces the same classification when shown the same image — rather than clinical accuracy, because clinical accuracy would require outcomes data we do not have. We believe being transparent about what we can and cannot validate is more honest than making inflated claims.
Data Privacy and Security Architecture
How we protect your biometric data at every stage
Iris images are biometric data, and we treat them accordingly. Here is how the privacy pipeline works:
Transit encryption
All uploads use TLS 1.3. The image is encrypted before it leaves your device.
Processing and discard
The server runs the analysis pipeline, extracts features, generates the report, and then deletes the original image. The feature vectors are anonymized and do not allow reconstruction of the original iris.
No third-party sharing
Your images are not sold, shared, or used to train external models. Reports are stored in your account and are not accessible to other users.
Right to deletion
You can delete your account and all associated data at any time. Deletion is permanent and irreversible.
Our full privacy policy covers the legal and technical details. The short version: we process your image, give you the report, and get rid of the image. We do not build a biometric database.
Open Questions and Future Directions
Active areas of ongoing research and development
Several parts of our methodology are under active development:
Multi-session tracking
We are building the ability to compare iris scans taken at different times, which would allow users to observe changes in zone scores over weeks or months. This requires careful handling of imaging variations (different cameras, lighting conditions, distances) that can confound longitudinal comparison.
Classification calibration
Our current model produces scores that are internally consistent but not calibrated against external benchmarks. We are exploring calibration techniques to make the numerical scores more interpretable and comparable across users.
Expanded feature extraction
We are investigating additional texture descriptors beyond Gabor and LBP, including deep feature embeddings from pre-trained convolutional networks. The trade-off is interpretability — deep features often perform better but are harder to explain, which runs counter to our goal of producing understandable reports.
Independent validation
We plan to pursue independent evaluation of our system by researchers outside our team. We believe external scrutiny is essential for any tool that makes claims about health-related observations, and we welcome collaboration with academic groups interested in studying our approach.
These are not promises — they are areas we are actively working on. Some may ship soon, others may take longer, and some may not pan out at all. We will update this page as our understanding evolves.
Frequently Asked Questions
What type of neural network does Iridology AI use?
Our pipeline uses convolutional neural networks (CNNs) for iris segmentation and pattern recognition. We employ a modified architecture inspired by the Daugman integro-differential operator for boundary detection, combined with multi-scale Gabor filter banks for texture feature extraction. The classification stage uses a fully connected network trained on annotated iris images mapped to iridology zone charts.
How accurate is the iris segmentation?
Iris segmentation — the process of isolating the iris from the sclera and eyelid — achieves high pixel-level accuracy on well-lit, forward-facing images. Performance drops on images with heavy occlusion from eyelids or eyelashes, or in low-light conditions. Our quality assessment step rejects images that fall below a sharpness threshold, but this is a limitation: some valid scans are rejected, and some borderline images may pass despite imperfect segmentation.
Is Iridology AI FDA cleared or CE marked?
No. Iridology AI is a wellness tool and does not have FDA clearance, CE marking, or any medical device certification. We are transparent about this: our system analyzes iris patterns according to iridology chart conventions and generates wellness-oriented reports. It does not diagnose, treat, or prevent any disease. We encourage users to treat our reports as complementary observations alongside professional medical advice.
What happens to my iris image after the scan?
Images are processed server-side and are not stored permanently in their original form. The feature extraction pipeline converts the iris image into a numerical feature vector, and the original image data is discarded after the report is generated. We do not share images with third parties. Our full privacy policy details the technical measures we use, including TLS encryption in transit and restricted access on the processing server.
How does zone mapping work in practice?
Zone mapping is the process of dividing the iris into regions that correspond to specific body systems according to iridology charts. After segmentation, the iris is unwrapped into a normalized rectangular representation (similar to the rubber-sheet model used in biometric iris recognition). Each zone in this normalized space maps to a body system — for example, the area around 7 o'clock corresponds to the digestive system in most iridology charts. Feature values within each zone are aggregated and scored, producing the per-system results you see in the report.
Ready to get started?
Upload your iris photo for an AI-powered wellness analysis.
Upload Your Photo