An image is more than a rectangle of colored dots. As important as the visual content of imagery is, another component is becoming even more transformative: the data that lives inside an image file, or is associated with an image, provides a treasure trove of meaning and context. It is hard to overstate how important this is in the language of photography.
The connected data is often integral to understanding the meaning of an image. Making use of the connectivity that surrounds an image will be an essential part of leveraging all visual media. As such, we need to make sure we understand the types of data that may be connected to a media object.
Let’s take a quick look at the information that may be embedded in an image or connected to it somehow. We’ll start with data created at the time of capture, and then move to data that can be added as the image travels through the internet.
Embedded date and location - Images created with a smartphone will typically include the time of capture and the GPS coordinates of the camera. This can become a key to all kinds of other data.
Embedded device (and therefore photographer) identifiers - Most images will contain an identifier for the digital camera that made the picture. And it’s a pretty easy step to correlate the device serial number with the person who owns the device (who is typically the photographer).
History of sharing and publication - As a person takes action with the photo, additional data is created. The act of selecting and posting to social media services is both an act of curation (e.g., “I want to share this picture”) and an expression of some kind of intent (e.g., “The person in this picture is my friend”).
User-created text and tags - The text that accompanies a posting or sharing of an image tells us even more about the subject of the photo and the intent of the photographer. (Teaser: We think this is an area where we can offer some very cool new tools to help capture intent.)
AI-created tags - It is increasingly common for images to be processed at some point by Artificial Intelligence services. This may be internal to the social media or DAM service, or it may be user-initiated. This information will continue to increase as new services reprocess old images.
Graph - A graph describes a set of relationships as nodes connected by lines. While we usually think of internet graphs as describing relationships between people, graphs can also describe images as they are viewed, liked, commented on, and shared.
Linked data - All of the above information can also link to other information (e.g., date and location might pinpoint a known event like a football game). This linkage can come in many forms, including some that may be invisible to you. We should also expect that this linkage will increase as time moves on, as more linkage comes online for people, places, events, and media objects.
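To make the first item above concrete: the capture location is typically stored as EXIF GPS tags, where latitude and longitude appear as degree/minute/second values plus a hemisphere reference. A minimal sketch of turning those into the decimal coordinates that mapping and event-lookup services expect (the function name and example values are my own, not from any particular library):

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style degrees/minutes/seconds plus a hemisphere
    reference ('N', 'S', 'E', or 'W') into signed decimal degrees."""
    decimal = degrees + minutes / 60 + seconds / 3600
    # Southern and western hemispheres are negative in decimal notation.
    return -decimal if ref in ("S", "W") else decimal

# Hypothetical EXIF GPS tags for a photo taken near the Eiffel Tower.
lat = dms_to_decimal(48, 51, 29.6, "N")
lon = dms_to_decimal(2, 17, 40.2, "E")
```

Once coordinates are in decimal form, correlating them with the embedded timestamp is what lets a known location and date "become a key to all kinds of other data," such as matching the photo to a public event.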
One advantage of cloud-based DAM systems is that they are much better positioned to use data connectivity. Cloud-native objects have the opportunity to create and make use of connectivity that is highly impractical for on-premises systems.
Though there is a lot of data that can be associated with an image, much of it is currently out of reach, and it will become available unevenly.
AI tagging is coming to market quickly and will improve constantly.
Linked data services are less mature. Some of the geodata linkage is already here, like place name lookup. ImageSnippets is an interesting example of a linkage between images and DBpedia entries.
There are some beta services providing graph services for the open web, but they are very new.
Social media graphs will remain controlled by the services and will be shared only when their business models give them a reason to do so.
Next week we’ll look at some of the ways rich media and computational imaging are changing the way we communicate.
This post is adapted from The DAM Book 3.0, which lays out these principles comprehensively.