You need to agree to share your contact information to access this dataset

This repository is publicly accessible, but you have to accept the conditions to access its files and content.


Blind and Low vision Individuals Networks and communities (BLINC) Dataset Card

Dataset Description

Dataset Summary

This dataset was created as part of a research collaboration project that seeks to reduce the AI divide for marginalized communities by improving the representation of people with disabilities in text-to-image model outputs. Specifically, the dataset captures important representation themes and subthemes for the community of individuals who are visually impaired through 401 images and corresponding metadata. The images are real-world photographs that primarily show individuals or groups of visually impaired people. The dataset was developed to demonstrate to AI systems/models how blind and low-vision persons define “good representation” of their community within imagery.

The dataset seeks to capture the diversity of the visually impaired community by representing a wide range of experiences and contexts. It includes individuals with different levels of vision impairment (blindness, low vision, and partial sight) and showcases the use of various assistive technologies such as white canes, braille devices, and screen readers. The images also reflect a broad spectrum of activities, including playing sports, learning in school, using digital tools, farming, working in offices, attending conferences, and participating in cultural events. Finally, the dataset represents multiple age groups (children, youth, adults, and older persons) and features diverse environments such as homes, schools, workplaces, farms, urban spaces, rural areas, and conferences. The dataset was collected in five counties in Kenya (Nairobi, Kisumu, Mombasa, Isiolo, and Garissa) and curated between April and July 2025.

Supported Tasks

text-to-image: The dataset can be used to train, evaluate, or fine-tune text-to-image generation models.

Languages

The annotations for each image are in English. The associated BCP-47 code is en.

Dataset Structure

Data Instances

This is what a typical JSON-formatted example from the dataset looks like:

```json
{
  "theme_folder": {
    "name": "theme_name",
    "description": "theme_description",
    "sub_themes": {
      "sub_theme_folder": {
        "name": "sub_theme_name",
        "description": "sub_theme_description",
        "images": [
          {
            "path": "path-to-file",
            "description": "image_description",
            "prompt": "image_prompt",
            "annotations": [
              {
                "label": "label_for_box",
                "box": { "x": 0.25, "y": 0.25, "w": 0.5, "h": 0.5 }
              }
            ]
          }
        ]
      }
    }
  }
}
```

Data Fields

The metadata includes: (1) a hierarchy of themes (each with a name and description) and subthemes (title, name, description) that reflect the community’s desired representation aspirations across the entire 401-image dataset. For each image (path) in the dataset, the metadata further includes: (2) a rationale for why the image was selected as a good instance of a representation theme (description); (3) a prompt describing the image (prompt); and (4) one to five textual labels (label) with bounding-box annotations given by x, y, w, and h dimensions.
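The nested metadata can be traversed with a few lines of standard-library Python. The sketch below is illustrative, not official loading code: the sample dictionary mirrors the schema shown under Data Instances with made-up values, and it assumes the box coordinates are fractions of the image size normalized to [0, 1] (suggested by the 0.25/0.5 example values, but not stated explicitly in this card).

```python
# Minimal sample following the schema in the Data Instances section.
# All values here are illustrative, not taken from the real dataset.
sample = {
    "theme_folder": {
        "name": "theme_name",
        "description": "theme_description",
        "sub_themes": {
            "sub_theme_folder": {
                "name": "sub_theme_name",
                "description": "sub_theme_description",
                "images": [{
                    "path": "path-to-file",
                    "description": "image_description",
                    "prompt": "image_prompt",
                    "annotations": [{
                        "label": "white cane",
                        "box": {"x": 0.25, "y": 0.25, "w": 0.5, "h": 0.5},
                    }],
                }],
            }
        },
    }
}

def to_pixel_box(box, img_w, img_h):
    """Convert a box of (assumed) normalized x/y/w/h fractions to pixels."""
    return (round(box["x"] * img_w), round(box["y"] * img_h),
            round(box["w"] * img_w), round(box["h"] * img_h))

# Walk the theme -> sub-theme -> image -> annotation hierarchy.
for theme in sample.values():
    for sub in theme["sub_themes"].values():
        for image in sub["images"]:
            for ann in image["annotations"]:
                print(ann["label"], to_pixel_box(ann["box"], 1024, 768))
# -> white cane (256, 192, 512, 384)
```

If the box values turn out to be absolute pixels rather than fractions, the conversion step can simply be dropped; the traversal itself depends only on the field names shown above.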

Dataset Creation

Curation Rationale

The dataset was curated to support adaptation and evaluation of text-to-image generative models. The dataset is structured around five core themes, each selected through participatory input from the community to reflect important activities, characteristics, and objects that should be represented in AI-generated imagery. Each core theme is further subdivided into sub-themes, providing a hierarchical taxonomy. Within each sub-theme, the dataset includes a curated set of images and corresponding annotations that exemplify the visual and semantic characteristics of the category. This structure enables targeted evaluation of model performance across diverse conceptual domains and supports research into theme-specific generation fidelity, compositional generalisation, and prompt grounding.

Source Data

Initial Data Collection

The dataset was developed through a participatory process that combined community workshops, participant-led invitations, and digital crowdsourcing. The project team, accompanied by a photographer, engaged with participants across Nairobi, Kisumu, Mombasa, Isiolo, and Garissa, while additional images were contributed remotely through WhatsApp by participants who wanted to take part but could not be reached in person.

The process began with workshops in each county, where visually impaired individuals, caregivers, and representatives of organizations of persons with disabilities came together to discuss what “good representation” of their community should look like in imagery. During these sessions, participants co-created the themes and subthemes that guided the dataset, including education, livelihoods, daily routines, sports, cultural life, and the use of assistive technologies. Many participants noted that they had never been photographed while engaged in certain key activities, which made the process both meaningful and empowering.

Following the workshops, participants invited the project team into their homes, schools, workplaces, farms, and community spaces to take photographs and to share already-existing photographs. To broaden participation, some individuals who were unable to host the project team submitted images through WhatsApp, making it possible for the dataset to include perspectives from participants in diverse areas.

All images were collected with informed consent, and metadata such as county, age group, activity, assistive technology, setting, and theme was recorded alongside each photograph. At each stage, the project team explained the purpose of the dataset, how the images would be used, and where they would be stored. Consent was sought verbally and in writing, with adaptations made to ensure accessibility. Guardians provided consent on behalf of children, while adults gave consent directly. Participants were informed that taking part was voluntary, that they could choose which images would be shared, and that they could withdraw their images from the project at any time.

Once the images were gathered, the project team reviewed them to determine which would be included. Selection was based on image quality, alignment with workshop-defined themes, and the need to ensure diversity in age, gender, activity, and setting. Any images that were blurry, repetitive, or failed to reflect the values and dignity of visually impaired persons were excluded.

Who are the source data producers?

The dataset was produced entirely by human participants. The images were created in two main ways:
• Guided photography: a photographer proficient in disability-inclusive practice worked with the project team and with visually impaired participants across five counties (Nairobi, Kisumu, Mombasa, Isiolo, and Garissa) to capture photographs. These images were taken in environments such as homes, schools, workplaces, farms, and community spaces, and were based on activities chosen by the participants themselves.
• Image donation: some participants who wished to be represented but could not be reached in person contributed photographs through WhatsApp. These submissions were voluntary and treated as data donations, expanding the reach of the dataset to include more diverse voices and settings.
In addition to newly collected material, the dataset also includes a small selection of images gathered during previous project activities with visually impaired individuals. These images were reviewed, repurposed, and integrated into the dataset because they aligned with the themes identified in the workshops and provided valuable examples of representation. The primary subjects of the images are persons who are blind or have low vision, along with some caregivers, peers, or community members involved in their daily lives. While the dataset records general attributes such as age group, activity, setting, and use of assistive technology, detailed demographic information about the individuals captured in the dataset is unknown; this also applies to those who contributed images digitally. Participation was voluntary, and individuals were informed of their right to withdraw at any point. While participants were not financially compensated for contributing images, those who attended the workshops received transport reimbursements to support their participation and ensure accessibility.

Annotations

Annotation process

To structure the image library, annotators grouped and categorised images into five themes (travel, education, sports, economic activities, and assistive devices), each with three to five sub-themes (e.g., team sport, competition, individual sporting achievements). For each individual image, annotators then completed the following tasks:
• Answering the question “Why did you select this image as a good representation for your community?”
• Editing an auto-generated prompt to ensure it accurately describes the image, and what is important for the community, using their preferred language.
• Providing a set of one to five bounding-box annotations that relate either to (i) objects that are special or specific to the community (e.g., a white cane), or (ii) people and/or animals that are important for the community (e.g., a young woman who has low vision; a service dog wearing an orange vest).
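The per-image constraints described above (one to five boxes, each with a textual label) can be expressed as a small validation helper. This is a hypothetical sketch, not part of the dataset's tooling; it additionally assumes the x/y/w/h values are normalized to [0, 1] and that a box must fit inside the image frame, which the schema example suggests but the card does not state outright.

```python
def validate_annotations(image):
    """Check the annotation constraints described in the Annotation process
    section: 1-5 labelled boxes per image, with (assumed) normalized coords."""
    anns = image["annotations"]
    assert 1 <= len(anns) <= 5, "each image carries 1-5 annotations"
    for ann in anns:
        assert ann["label"], "every box needs a textual label"
        box = ann["box"]
        for key in ("x", "y", "w", "h"):
            # Assumption: coordinates are fractions of the image size.
            assert 0.0 <= box[key] <= 1.0, f"{key} must lie in [0, 1]"
        # Assumption: the box stays fully inside the image frame.
        assert box["x"] + box["w"] <= 1.0 and box["y"] + box["h"] <= 1.0
    return True

# Example record shaped like the schema above (values are illustrative).
ok = validate_annotations({
    "annotations": [{
        "label": "white cane",
        "box": {"x": 0.25, "y": 0.25, "w": 0.5, "h": 0.5},
    }]
})
print(ok)  # -> True
```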

Who are the annotators?

Annotations were made by a single person, who acted as the project lead in the dataset generation process. Demographic or identity information is not provided.

Personal and Sensitive Information

The dataset does not include any directly identifiable personal information. However, the metadata does contain sensitive information that reflects aspects of personal identity. Specifically, the metadata and images capture characteristics such as racial or ethnic origins, gender, and disability status. These attributes are essential to the purpose of the dataset, as it was designed to showcase accurate and diverse representations of persons who are blind or have low vision. In addition, the dataset includes images of children, which increases the sensitivity of the material.

Considerations for Using the Data

Social Impact of Dataset

This dataset was created to reduce the AI divide for marginalized communities by improving how persons who are blind or have low vision are represented in text-to-image models. By making this dataset publicly available, we hope to encourage the development of inclusive technologies that reflect the lived experiences, strengths, and abilities of persons with visual impairments and uphold their dignity. Such use could positively impact society by:
• Supporting the creation of AI-generated images that portray persons with disabilities more accurately and respectfully.
• Helping educators, researchers, and advocates access realistic and representative imagery for training, awareness, and advocacy purposes.
• Enabling technology developers to design more inclusive systems and tools that do not exclude persons with disabilities from visual representation.
• Contributing to broader social inclusion by challenging stereotypes and ensuring that persons with disabilities are visible in the data that powers AI systems.
At the same time, we recognize potential risks associated with the dataset’s use:
• If applied out of context or without sensitivity, the images could unintentionally reinforce harmful stereotypes about disability through misrepresentation or misuse.
• While no directly identifying personal information is included, the dataset contains sensitive attributes such as disability status, gender, and images of children.
• The dataset could be repurposed for applications that do not align with the project’s goals, such as exploitative advertising or entertainment.
The inclusion of children with visual impairments in the dataset also presents several potential risks. These include misrepresentation, where children may be portrayed in ways that reinforce stereotypes; exploitation, where their images could be used in profit-driven or sensationalized contexts outside the intended scope of inclusive innovation; and digital profiling, where sensitive characteristics such as disability status could be inappropriately inferred or tracked. There is also the risk of indirect identifiability, particularly in smaller community settings where children may be more easily recognized even without personal identifiers. Users of the dataset are strongly encouraged to approach it with respect for the community it represents and to prioritize applications that advance inclusion and dignity.

Discussion of Biases

As with any dataset focused on a specific community, there are inherent biases in both the scope and content of this collection. The dataset primarily represents persons with visual impairments and does not capture the wider spectrum of disabilities such as physical, hearing, or cognitive disabilities. This focus was intentional to address a significant gap in AI datasets, but it also means that the dataset is not representative of all disability communities.

Geographically, the dataset was collected in five out of 47 counties in Kenya (Nairobi, Kisumu, Mombasa, Isiolo, and Garissa). As a result, it may reflect cultural, social, and environmental contexts specific to Kenya and may not fully represent the experiences of visually impaired persons in other counties, regions, or countries.

There are also gaps in representation within the dataset itself. While images of children, adults, and older persons are included, some subgroups are less well captured. For example, children with multiple disabilities or additional access needs are underrepresented. Similarly, certain settings and activities, such as healthcare environments, private family life, or highly specialized workplaces, are less visible compared to more general contexts like schools, homes, or community spaces.

To reduce these biases, steps were taken to ensure diversity in age, gender, activity, and assistive technology use within the visually impaired community. Workshops and community engagement guided which themes and subthemes were prioritized, and participants were encouraged to suggest and contribute images that filled important gaps. Nonetheless, users of the dataset should remain aware that not all experiences or subgroups are equally represented, and the dataset should be interpreted as a partial but meaningful contribution rather than a comprehensive resource.

Additional Information

Dataset Curators

This dataset was created by Nicholas Ileve Kalovwe, Daniel Onyango, Zeina Mahmoud, Benson Masero and Olvan Omond in collaboration with Microsoft Research Team.

Licensing Information

This dataset is licensed under the Creative Commons Attribution-ShareAlike 4.0 (CC BY-SA 4.0) license.

Contributions

We gratefully acknowledge the support of special schools for visually impaired learners, teachers, parents, and guardians who facilitated the participation of children, as well as the community members whose contributions shaped the dataset both in workshops and through image sharing. We also recognize the commitment of colleagues within Kilimanjaro Blind Trust Africa who supported participant mobilization and ensured that the dataset could be shared responsibly for inclusive innovation.
