Friday, August 26, 2022

More Old Magazines need New Home

Inspired by my recent magazine housecleaning, I'm also making these old technical magazines available to anybody who wants them:

  • IEEE Computer, December 2002-March 2013.
  • Communications of the ACM, January 2005-October 2012.
  • IEEE Software, November/December 2002-March/April 2013.
  • ACM Queue, June 2004-May/June 2008.

Let me know if you have any interest: smeyers@aristeia.com.


 

Wednesday, August 24, 2022

The Beardsley Salome Dinnerware Project, Part 1: Concept to Artwork

In 1989, my friend, Karen, invited me to an exhibition of drawings by Aubrey Beardsley at Harvard's Sackler Museum. I'd never heard of Beardsley or the Sackler Museum, but Karen was working on her Ph.D. in Art History, and I wasn't about to turn down a chance to attend an exhibition with an expert.

The Climax
The Peacock Skirt
My memory of the drawings themselves is hazy, but I clearly remember thinking that Beardsley's black-line artwork would look great on white dinner plates. The Peacock Skirt, for example, seemed like it would make a fine choice, as would The Climax.

I wasn't the first to look at Beardsley's drawings and think dishes. In 1979, Poole Pottery introduced its Beardsley Collection, and for its 2020 exhibition, Tate adorned mugs and plates with Beardsley artwork. These days, you can find plates and mugs with Beardsley drawings at ifarfor, though you may want to take into account that it's in Russia before placing an order.

I've long liked the idea of custom-made dinnerware. When the Internet blew the lid off the Pandora's box of product personalization in the 2000s, I started looking for places to make dishes for me. It wasn't difficult to find companies to design and produce limited runs of tableware, but their idea of a short run was a dozen or more place settings, each with five or more pieces. Including set-up fees and minimum order requirements, total costs started at several thousand dollars. That might pencil out for a restaurant, bed and breakfast, private club, or corporate dining room, but for an individual like me? No.

I bided my time. In August 2021, as part of a spot check of options for made-to-order dinnerware, I discovered that Enduring Images offered custom sets under attractive conditions. From artwork you supply, they create decals using ceramic toners. The decals are applied to "blanks" (i.e., dishes provided by you or by them) and fired in a kiln. The results are functionally indistinguishable from dishware you'd buy at retail. They're no more prone to scratching than mass-market tableware. If the blanks are dishwasher- and/or microwave-safe, the dishes are, too.

This was exactly what I was looking for. I began work designing a Beardsley dinnerware set.

The Salome Challenge

Aubrey Beardsley's best-known work is probably the 16 illustrations he created for Oscar Wilde's Salome. The Peacock Skirt and The Climax are among them. However, these drawings comprise a mere drop in Beardsley's artistic bucket. His work for Thomas Malory's Le Morte Darthur makes up several hundred images, for example, and that's still less than half of what he created before he died of tuberculosis at only 25. My initial plan was to use the drawings from Salome for my dinner plates and to select from his other works for the remainder of my set, but I became so intrigued by the Salome illustrations that I decided to use only them.

The decision was partly motivated by the challenge of pulling it off. I had decided I wanted 12 place settings, even though it's nearly inconceivable that my wife and I would host a meal with 10 other people. In addition, I wanted five serving dishes, because I had found a mix of that many platters and large bowls that I thought looked nice. Finally, I wanted every piece to be unique: each plate, bowl, and platter should have its own look. With 12 dinner plates, 12 smaller plates (e.g., for salad or dessert), 12 bowls, and five serving dishes, that necessitated 41 different designs. The fact that Beardsley produced only 16 illustrations for Salome (plus two front cover designs, one back cover design, and a spine design), well, that was part of the challenge.

Getting the Images

It's easy to find copies of Beardsley's Salome drawings on the Internet. Quality varies, in part because some images are scans of the original drawings, while others are scans of prints made from those drawings. I wanted the best, most authoritative images I could find, so I made a digital bee-line for the Harvard Art Museums. They have nine of the 16 Salome originals.

Scans of these drawings are freely downloadable, but the resolution is terrible. The Peacock Skirt's 745 x 1024 pixels is typical. Printed at 300 dpi, which I consider the bottom of the barrel for print resolution, the image would be about 2.5" x 3.5". That's considerably smaller than I want to put on a dinner plate, much less a serving dish.
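
The arithmetic is simple: print size is pixels divided by resolution, and required pixels are print size times resolution. A quick sketch (the 8-inch target is just an illustrative size for a plate's printable area):

# Print size from pixels, and pixels needed for a given print size.
width_px, height_px = 745, 1024
for dpi in (300, 600):
    print(f'{dpi} dpi: {width_px / dpi:.1f}" x {height_px / dpi:.1f}"')
# 300 dpi: 2.5" x 3.4"
# 600 dpi: 1.2" x 1.7"

target_inches, print_dpi = 8, 600      # hypothetical plate-center size and print resolution
print(target_inches * print_dpi)       # 4800 pixels across, well beyond Harvard's free scans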

Harvard offers higher-resolution imagery, but it's subject to the Harvard Art Museums' licensing policy, which includes this restriction:

Each image must be reproduced in its entirety without cropping, bleeding, alteration, splitting, or other modification.

Beardsley's work for Salome used two colors: black ink and white paper. The drawings are now over 125 years old, so the inks have faded and the papers have yellowed. Compare Harvard's scan of Salome on a Settle (left) with the cleaned-up black and white version I created (right):

Salome on a Settle

I believe my version is more representative of Beardsley's work than the original has become. I'm certain it would look better on white dinnerware. However, the Harvard Art Museums' licensing provisions preclude color correction. They also prohibit removing the border present in Beardsley's drawing. That's problematic for my tableware, because I want to use the rim of a dinner plate as the frame around the drawing instead of the border that Beardsley drew.

Harvard's licensing terms surprised me, because Aubrey Beardsley died in 1898. All his drawings are in the public domain, at least in the United States. Such works are free of copyright restrictions. My understanding is that they can be used by anyone in any way.

Of course, I don't want to use Beardsley's actual drawings on my dinnerware. I want to use scans of them. Enter the lawyers. If a drawing is in the public domain, is a scan of it also in the public domain? Or is creation of the scan tantamount to creation of a new work of art that's protected by its own copyright? 

I'm not so paranoid as to think that Harvard (or any other owner of a Beardsley drawing) will sic their legal team on me if I adorn dishes for my personal use with scans of their artwork. Nevertheless, I've been the beneficiary of copyright protection for the books I've written, so I try to respect the rights of others. Furthermore, I find the legal question interesting. What does it take to make a newly-created image worthy of copyright?

More than simply scanning something else, as it turns out, at least in the United States. In Bridgeman v. Corel, the court ruled that slavish copies of two-dimensional works lack the originality needed to qualify for copyright protection. (The legal landscape may be different when 3D objects such as sculptures are involved.) Practically speaking, scans of 2D artworks in the public domain are themselves in the public domain. Harvard and other institutions may attempt to impose more restrictive licenses, but it's unlikely they'd survive a legal challenge.

Rather than petition Harvard for higher-resolution scans (and possibly get embroiled in a licensing dispute), I shifted my search from scans of their drawings to scans of prints made from them. Such prints are in many museums, because they were part of Wilde's book, Salome. If you have a copy of the book, you have copies of Beardsley's drawings.

In the end, most of the images I used came from the Princeton University Art Museum. When I first looked at their scans, the majority had significantly higher resolution than I was able to find elsewhere--nearly six times the resolution of those at Harvard. However, a few scans had relatively low resolution. I wrote to ask about these suspect images. Princeton confirmed the anomalies and said they'd scan again. When they posted the updated imagery a few weeks later, I was surprised to discover that they'd rescanned all the Salome prints, not just the ones with low resolution. The new resolution was nearly twice the high resolution they'd had before! Thank you, Princeton Art Museum!

These scans had the best resolution I was able to find, but my goal wasn't maximum resolution; it was the highest overall quality for use on tableware. Image resolution was only one consideration. A second consideration was whether a scan was of an original Beardsley drawing or of a print. Originals were preferred. A third was fidelity to the original. Some scans show more detail than others. Consider these two versions of a portion of The Toilette of Salome II:

Detail from The Toilette of Salome II

The one on the left is from the Princeton Art Museum and is a scan of a print. The one on the right is from the British Museum and is a scan of the original drawing. The Princeton scan is at about 1100 dpi, while that of the British Museum is at only about 280 dpi, but you can see that there are more details in the scan from the British Museum. For this image, I chose the scan from the British Museum.

I ultimately availed myself of scans from the Princeton University Art Museum, the Princeton University Library, the British Museum, B at Flickr, and Alamy. I paid Alamy a small fee for use of one image. Everything else was free.  

Acquisition of the images spanned several months. Some of that time was spent in the legal cul-de-sac of usage restrictions on scans of public domain works, some elapsed as various museums took their time responding to messages I'd sent, and some ticked by as Princeton worked to post new imagery. The dominating activities, however, were searching for and downloading images, examining them for quality and authenticity (some online Beardsley images have been subtly revised), and comparing different scans of the same illustration to find the best versions.

Cleaning the Images

Most scans were of complete book pages. Beardsley's artwork didn't extend to the page edges, so I cropped off the extra space. Some scans were askew, so I straightened them. Then the fun began.

As I noted, Beardsley's drawings (and the prints made from them) are two-color works, but the inks and papers have undergone color shifts since they were created. Scanners pick up color variations, so a scan of a Beardsley drawing will typically yield an image with thousands or tens of thousands of colors. The Princeton University Library's scan of the original drawing for The Black Cape contains a full 117,000 colors. To save space, some institutions store the scans in JPG format, which, as a side effect of reducing the size of a file, may increase the number of colors in the image. 

Recreating the original appearance of a Beardsley drawing required taking images with many colors and transforming them into images with only black and white. The program I used for this (XnView MP) has an option to do just that. Check a box, and the deed is done. Unfortunately, applying that option directly to downloaded scans didn't yield very satisfactory results. Image details got lost, and artifacts were introduced. Converting to greyscale before reducing to black and white didn't help.

What I found worked best was to adjust attributes of the scan, primarily contrast and exposure, before checking the box to reduce the image to two colors. Each scan had to be tweaked individually to get the best results.
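
For anyone who'd rather script it than click, here's a minimal sketch of the same adjust-then-threshold idea using Python and Pillow; the file name and the contrast and brightness factors are placeholders, since every scan needed its own values:

from PIL import Image, ImageEnhance

scan = Image.open('salome_scan.jpg').convert('L')            # drop color, keep luminance
scan = ImageEnhance.Contrast(scan).enhance(1.8)              # push ink and paper apart
scan = ImageEnhance.Brightness(scan).enhance(1.1)            # rough stand-in for an exposure tweak
bw = scan.point(lambda p: 255 if p > 128 else 0, mode='1')   # reduce to pure black and white
bw.save('salome_bw.png')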

Drawings and prints over a hundred years old have spots, smudges, and other imperfections. Dust specks may be present during scanning. Scanners dutifully record all these things. Combined with the fact that the process of transforming colored pixels into black and white ones isn't perfect, it's no surprise that the two-color versions of scans contained errors. Some black pixels should have been white, and some white ones should have been black. I used a simple image editor (Microsoft Paint, of all things, because it's what I had) to flip the colors of such pixels.

Along the way, I fixed "mistakes" in the scanned images. I'd encounter a few black pixels on the face of a character that looked like they shouldn't be there. Or I'd find a white line in a black garment that petered out when it seemed like it should continue. When that happened, I'd consult the original scan to see what was there. If the pixels were still on the face or still faded away in the clothing, yet they still looked wrong, I'd check scans of prints from other institutions to see if they looked the same. If I could, I'd consult a scan of the original drawing. Unfortunately, the low resolution of Harvard's online scans often made answering questions about image details difficult. I spent a lot of time agonizing over pixels and staring at three monitors: one showing the black and white image I was working on, one showing the scan from which that image was derived, and one showing a different scan of the same illustration (ideally a scan of the original drawing). In cases where I wasn't sure what to do, I just made a choice and moved on. I consoled myself with the knowledge that I was working on artwork for dinner plates and cereal bowls, not retouching the Salvator Mundi.

Rework

I cleaned images of most illustrations more than once. When I first came upon the scans at the Princeton Art Museum, their resolution was so good, I stopped looking for anything better. I knew Princeton was working to address the anomalous low-resolution scans I'd reported, but I didn't realize they were rescanning all their Beardsley prints at even better resolution. By the time the new scans were online, I'd cleaned up the ones I'd already downloaded. Unwilling to leave better resolution on the table, I repeated the cleanup work on the new scans.

I eventually realized that highest resolution didn't always equal highest overall quality. I did the work to clean up the Princeton Art Museum's version of The Toilette of Salome II before recognizing that the British Museum's lower-resolution scan retained more detail. So I cleaned it up, too. Similarly, I did preliminary cleanup work on the Princeton Art Museum's scan of a print of The Black Cape before discovering that the Princeton University Library (a separate entity from the Art Museum) offered a scan of the original drawing. The drawing had authenticity on its side, so I stopped working on the scan from the print and shifted my attention to the one from the drawing. 

The Blanks

Interspersed with my work acquiring images was work acquiring the dishes to serve as blanks. I considered square and squarish shapes, with and without texture, before deciding to stick with classic untextured rimmed round plates. 

I wanted a bright white to provide good contrast with the black Beardsley drawings, and I didn't want to try to evaluate shades of white from online photos, so I went to the local Bed Bath & Beyond (BBB) to look at what they had in stock. Their Neveah White line had a color I liked, it featured pieces in shapes and sizes I found appealing, and, being on clearance, it was attractively priced. I bought what the store had, and I ordered the remainder from BBB online.

It was amusing to see the pieces trickle in. Oftentimes, plates and bowls arrived in ones and twos, each shipment from a different store. When I placed the order, I envisioned a shipment from a giant warehouse with shelves of discontinued pieces, not dribs and drabs from stores across the United States.

Many of the pieces I received were chipped, cracked, or otherwise marred. Perhaps I should have expected that from clearance items. The local BBB took them back without fuss, but the process of ordering the pieces, waiting for them to arrive, having Enduring Images run tests to confirm that they would provide a suitable substrate for their decals, realizing that too many blanks were flawed to proceed, and returning them took several weeks.

It also put me back at Square One on the blank front. Well, almost. I had originally decided not to use the blanks offered by Enduring Images, because they didn't have serving dishes I liked. However, their other pieces were acceptable. The BBB platters and serving bowls were a reasonable color match for the Enduring Images dishware, so I went the mix and match route: plate and bowl blanks from Enduring Images, serving dish blanks from BBB.

Enduring Images' dinner plate blanks were back ordered several months. That was initially frustrating, but it turned out not to matter. A time-consuming fight with PowerPoint was looming on the horizon...

Designing Dinner Plates

For my dinner plates, I knew I wanted a common rim design with a unique Beardsley illustration in the middle of each piece. But what rim design? I mocked up more than two dozen variations. I started with black adornments on a white rim, but I soon decided I'd remove the rectangular border present in most of Beardsley's drawings and employ a black plate rim as an ersatz frame. This simple device often disguised that I was using images designed for rectangular pages on plates that were round.

My mockups convinced me that the rim should serve two goals. First, it should act as a black frame that neither competes with nor detracts from the artwork inside. Second, it should convey that the dinnerware is based on Salome. I decided to fulfill these goals by employing a plain black band adorned with a copy of the symbol that Beardsley developed for the cover of Wilde's book.

These are my dinner plate designs:

Because I'll eventually forget which drawing is on which plate, and because the academician in me loves attributions, I decided to put a decal on the underside of each plate identifying the Beardsley illustration on it and the source of the corresponding scan. For these underside decals, I added a splash of color. Beardsley worked with black ink, but everyone familiar with the publication process knew that his drawings could be printed in any color. Oscar Wilde recommended scarlet for the artwork on the book's cover. His publisher ignored this request, but in its honor, I decided to use red for Beardsley's insignia on the bottom of my pieces. This is the front and back of one of the dinner plates:

Designing Smaller Plates

I expected the rim design for the smaller plates to go quickly, because I had an idea for it early on. Peacocks are common in Beardsley's work for Salome, and in The Peacock Skirt, there's a peacock perched on Salome's back that I thought would look great on a plate rim. Here's Beardsley's illustration and a rim mockup:

Sadly, I found that the peacock that worked well as a detail in The Peacock Skirt foundered on its own. It evoked more Dr. Seuss than Aubrey Beardsley. I returned to the drawing board.

Some 20 designs later, I had something I could live with. I again wanted a design that tied the set to Salome, but I also wanted something that would look nice stacked atop a dinner plate. For the Salome tie-in, I ended up using the S from Beardsley's lettering for the book title. This entailed considerably more work than I had anticipated. Creating a high-enough-resolution image of a single letter from a photograph of a book cover in a museum exhibition was, well, let's just say I spent a lot of time fiddling with pixels. Four times I junked what I had and started over.

For the artwork in the center of the smaller plates, I decided to focus on heads and faces. That allowed me to give prominence to details of Beardsley's drawings that are easy to gloss over when viewing his compositions as a whole. It also afforded me the opportunity to use parts of his work that hadn't made the cut for the dinner plates.

Here are my designs for the smaller plates:

I find that the black part of the rim lends the impression that some figures are floating above an invisible horizon. The effect came about by accident. The height of the black is simply the approximate height of the left side of the black background in The Climax when put on a plate. One of my experiments was to use that background as a rim design. The result didn't wow me, but it put me on the path of partially black rims, which ultimately led to the design I adopted.

Designing Bowls

The designs for my plates feature a different image on each plate, but a fixed set of colors (black and white). For my bowls, I flipped this around. They feature a fixed image (the Salome symbol from my dinner plate rims), but each bowl employs a different color. 

Enduring Images warned me that the colors produced by ceramic toners can differ noticeably from what's displayed on a computer screen. They recommended we run a sample tile with the colors I planned to use. It was good advice. Some colors that were easily distinguishable on screen looked nearly identical on the test tile. I adjusted some color choices, we fired another sample, and we were good to go. 

Here are my bowl designs:

Designing Serving Dishes

The serving dishes consist of three platters and two bowls. The platters are rectangular. Two have about the same aspect ratio as Beardsley's drawings. I used full Beardsley illustrations for them, borders and all. One of the platters is narrower, but its shape is a good match for the depiction of Salome in the drawing for the book's list of pictures. I chopped off the rightmost two-thirds of Beardsley's illustration, and the result fit the platter perfectly. Here's Beardsley's List of the Pictures (left) and my platter design based on it (right):

Like the smaller plates, the serving bowls feature details from Beardsley drawings, but this time it's not heads or faces. One bowl (left) shows the powder brush from Cul-de-Lampe. The other (right) shows the lily sprouting from the blood of John's severed head in The Climax:

Modifying Beardsley's Artwork

I think Beardsley did fabulous work for Salome. I devoted a great deal of time to the creation of faithful two-color versions of his drawings, but that merely got me to the dishware design starting line. Beardsley targeted rectangular spaces (book pages), while I was designing for circular objects (plates or bowls). This often put us at odds. I felt no compunction about removing elements of Beardsley's pictures in the service of designs I found more attractive. A good example is my treatment of the drawing, John and Salome. On a (rectangular) serving platter (left), I used it exactly as Beardsley drew it, but on a (circular) dinner plate (right), I removed both the border and some horizontal lines:

Different renditions of John and Salome

If this puts you off, perhaps you'll feel better when I note that for dishes with modified drawings, the attributions on the undersides indicate that that is the case. 

Sins of removal are nothing compared to their artistic antipode: sins of augmentation. In some cases, I added elements to Beardsley's illustrations that he never drew! 

This should horrify you. It horrified me. Showing the truth and nothing but the truth, but not the whole truth (i.e., omitting part of an illustration) is dodgy enough. Showing things that aren't true at all is vastly worse. Yet still I put posthumous pixels on Beardsley's pen. There simply were cases where I felt that a lot of Beardsley and a little of me made more visual sense than Beardsley all by himself.  

Consider the caricature of Oscar Wilde in The Woman in the Moon. I wanted to use it as one of the faces on my smaller plates, but it's tucked into the corner of a drawing, and its rectangular shape is a bad fit for the circular region I needed to fill. So I extrapolated what Beardsley drew until I had what I desired. Compare his work (left) with what I turned it into (right):

I did more than just add ink around the edges of Beardsley's creations. One of Salome's most striking images is John's head on a platter. Setting aside why anybody would want to eat off a plate with that on it, I was determined to use it as part of  the "faces and heads" theme for my smaller plates. Unfortunately, Salome is handling the head in that illustration (The Dancer's Reward), and if you're looking only at the part of the picture showing John's head, Salome's hands are a distraction. I removed them. That left gaps in the drawing, so I filled them in. Compare Beardsley's work (left), the same thing with a first cut at removing Salome's hands (middle), and my final image (right):

Not a pixel I added is worthy of Aubrey Beardsley, but I'm happy with the result. The artistic blasphemy doesn't bother me.

PowerPoint as Hotel California

To design plates, you don't need fancy software--at least not the way I do it. If you can draw a circle and put an image in the middle, you're most of the way there. Add the ability to perform basic shape and image manipulation (e.g., crop and rotate, add and subtract shapes, set a transparency color), and you're set. PowerPoint (PPT) can do all that, and, unlike proper graphics programs such as Photoshop and GIMP, it was something I already had and knew how to use. So I did.

This was a colossal mistake. However, even in retrospect, I don't think it's one I should have foreseen. There were no hitches as I designed my pieces. It was only when I went to generate PDF for delivery to Enduring Images that I ran into trouble.

PPT directly supports the generation of PDF, but, by default, it reduces the resolution of the contained images to 200 dpi. Installing and using PDF printers bumps that up to 220. Setting the right combination of program options (some of which must be set before you save your work the first time) can push this to 300. That's it. If there's a way to go higher, I wasn't able to find it, and I spent a lot of time looking.

I'd worked very hard to acquire, clean up, and design with scans of Beardsley's artwork at resolutions of at least 600 dpi. Enduring Images can print at up to 1200 dpi. I didn't want to throw such resolution away because of some ridiculous PPT limitation. It was easy to demonstrate that, once I'd enabled the proper combination of options, PowerPoint files contained the images I was using at their full resolution. It was equally easy to show that PDF had no trouble containing high-resolution images. So PPT allowed me to import high-resolution images, and it allowed me to work with them inside the program, but if I wanted to generate PDF, it insisted I settle for 300 dpi or less.
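
Demonstrating that doesn't require anything fancy: a .pptx file is just a ZIP archive, and imported pictures live under ppt/media/, so their stored dimensions can be checked directly. A minimal Python sketch, with a placeholder file name:

import io
import zipfile
from PIL import Image

with zipfile.ZipFile('salome_plates.pptx') as pptx:
    for name in pptx.namelist():
        if name.startswith('ppt/media/') and name.lower().endswith(('.png', '.jpg', '.jpeg', '.tif', '.tiff')):
            img = Image.open(io.BytesIO(pptx.read(name)))
            print(name, img.size)    # pixel dimensions of the image as stored in the file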

That got my hackles up. "Fine," I thought, "I'll find another way. There's more to life than PDF." I set my sights on TIFF. One program upgrade, a registry hack, and an extrapolation of said hack later, I was generating 600-dpi TIFF images of my designs and patting myself on the back. "Take that, PowerPoint!," I gloated. "I wanted 600 dpi, and I've got it!"
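
For the curious: Microsoft's documented ExportBitmapResolution registry value is the usual way to raise the DPI PowerPoint uses when exporting slides as images, so it's the obvious candidate for the hack in question. A sketch of setting it from Python (the "16.0" in the key path assumes Office 2016/365; adjust for other versions):

import winreg

key_path = r'Software\Microsoft\Office\16.0\PowerPoint\Options'
key = winreg.CreateKey(winreg.HKEY_CURRENT_USER, key_path)
winreg.SetValueEx(key, 'ExportBitmapResolution', 0, winreg.REG_DWORD, 600)   # export images at 600 dpi
winreg.CloseKey(key)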

Let us recall the story of Tithonus, whose request for immortality was granted, but who failed to ask for eternal youth to go with it. Although he never died, he grew ever older and more infirm. Not what he had in mind. The PowerPoint parallel is that although I found a way to coax 600 dpi designs out of the program, I took fidelity for granted. Such naiveté! The TIFFs PPT created at a sparkling 600 dpi didn't contain the images I'd imported and that PPT stored. They contained modified versions of those images. In particular, they'd had anti-aliasing applied to them. This had the effect of taking my carefully-prepared, high-contrast, two-color images and softening the edges by adding colors. It undid a key part of the cleanup work I'd performed on the scans I'd downloaded. Definitely not what I had in mind.
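
The damage is easy to detect, at least: a faithful two-color image reports exactly two distinct pixel values, while an anti-aliased export reports many more. A minimal Pillow sketch, with a placeholder file name:

from PIL import Image

img = Image.open('design_export.tif').convert('L')
colors = img.getcolors(maxcolors=1 << 20)    # list of (count, value) pairs, or None if there are even more
print(len(colors) if colors else 'more than 1,048,576 distinct values')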

I couldn't help but think of the line from Hotel California: "You can check out any time you like, but you can never leave." High-resolution images can go in to PowerPoint, but once they're part of a design, they can't come back out.

My track record for getting software to do what I want is pretty good. It's generally just a matter of putting enough time and energy into it. Not in this case. PowerPoint beat me. I got to the 99½ yard line, but I couldn't get the ball into the end zone. Ten months into the project, I realized I should have gone with something like Photoshop or GIMP, after all.

When in Doubt, Farm it Out

From the perspective of a graphics professional, my designs are laughably simple. My dinner plates, for example, are just a black ring with a picture in the middle and another one on the ring. Enduring Images had explained how they would take my artwork, import it into Photoshop, and use that to print the decals. Because my designs were so simple, I suggested that Enduring Images take my high-resolution, un-anti-aliased images and my design mockups and create the final artwork directly in Photoshop. That would bypass PowerPoint and obviate the need for me to learn a real graphics program. To my relief, they agreed. 

The Ending is Pending

That's where things stand now. Enduring Images has my designs and my high-resolution artwork, and they have the blanks on which to print them. Soon they'll produce a complete dinner plate for my examination. In principle, they could run everything, but just when you think nothing could possibly go worng (not a typo--look it up), something does. Better to find out on one piece than on 41.

There may be additional bumps down the road, but I'm confident we'll get past them. Later this year, I expect to be the proud owner of what will probably be the world's only Beardsley Salome dinnerware set. It will be 33 years after Karen and I visited the Beardsley exhibition at Harvard, and it will be more than a year after I first wrote Enduring Images about custom tableware, but the key thing is that it will be. When it is, I'll post again and let you know how things turned out.



Sunday, August 21, 2022

Old Magazines need New Home

I've decided that the shelf space I currently devote to old technical magazines can be put to better use. That means the old magazines need a new home. Otherwise they'll go in the recycling bin. 

The magazines in question are Dr. Dobb's Journal from March 1996 through February 2009 and Embedded Systems Programming (later Embedded Systems Design) from March 1997 through May 2012.

If you'd like to save some quarter-century-old magazines and their compatriots from being turned into paper fiber, let me know: smeyers@aristeia.com.


Friday, April 29, 2022

Image Search and Google Earth for Identifying Pictures

For more than 50 years, my parents used slide film to record family memories. I recently had their 1500+ slides digitally scanned, and I've been working to organize the resulting files. At the outset, the job seemed pretty straightforward. My father had carefully labeled the boxes and trays the slides were in: "High Rock 1953", for example, or "Europe 1978".

However, when I peeked at the slides in a tray labeled "Hawaii" and found dozens of pictures of my sister in her infancy, I knew I was in for more than I had expected. When my father confidently proclaimed that the little girl in a slide I showed him was my cousin when I knew it was my sister, I realized I was on my own.

Among the slides in a set labeled "Jeff Park 1956" was this mountain shot:

I've been to Jefferson Park several times, and the view from there doesn't look like this. I guessed that the slide had been mis-filed. 

One of the wonders of the Internet, in my view, along with song recognition, speech-to-text capability, and worldwide navigation, is image search, whereby search engines return images similar to one you provide. I use this page to perform image searches at Google, Bing, and Yandex simultaneously, because each of those sites sometimes gives the best results. In this case, Bing volunteered that the picture looked like Mt. Shuksan, and images found by Google and Yandex agreed.

Mt. Shuksan is nearly 300 miles from Mt. Jefferson.

 Another slide in the "Jeff Park 1956" collection was this one:

  
I suspected that it was related to the picture of Mt. Shuksan, but I wanted to be sure. I searched for images taken from the top of Mt. Shuksan, but I didn't find anything that looked like this. Then I used Google Earth (another Internet wonder) to virtually plop myself on top of the mountain and look around. When I did, I found this essentially perfect match:

So the picture was, indeed, taken from the summit of Mt. Shuksan. 

From time to time, the power of the Internet really amazes me. Image search made it possible for me to identify an incorrectly labeled mountain, and Google Earth allowed me to determine that a picture I guessed was taken from the top of that mountain over 60 years ago was truly taken there.


 

Thursday, March 10, 2022

Buffy at 25

25 years ago today--March 10, 1997--Buffy the Vampire Slayer debuted on TV. I didn't start watching until a few years later, but it quickly became my favorite TV show. At some point it morphed into my favorite TV show ever. Now I call it the best TV show ever. I suspect I'll always feel that way.

The twenty-fifth anniversary of Buffy got me thinking about, well, years, and in particular about the differences in ages between the characters on the show and the performers who played them. Rivers of words have been devoted to the show and the characters and the people behind the episodes, not to mention the meaning of it all, but I've seen only passing references to the fact that, for example, when the show first aired, 26-year-old Charisma Carpenter was playing high school sophomore Cordelia Chase. Our first few times through the series, my wife and I completely bought Carpenter as a fifteen- or sixteen-year-old, but now that we know she was a decade older, she looks less high schoolish to us. That doesn't mean she's less good in the role. She's still a great Cordelia. It's impressive that she wasn't just portraying somebody ten years younger than she was; she was playing a character 40% younger than her years. That can't be easy.

It was similar for Nicholas Brendon as Xander. He was also about 10 years older than his character. Even today watching him in Buffy Season 1, I have no trouble seeing him as a sixteen-year-old boy. I'm not sure what that says about him. Or me.

At the other end of the spectrum are Mercedes McNab as Harmony, who at the time of Buffy's debut was a high school junior playing a high school sophomore, and Michelle Trachtenberg as Dawn, a fifteen-year-old playing a fourteen-year-old at the time she joined the show.

Here's some information I compiled on ages of Buffy characters and performers once I got it into my head to look this stuff up. I apologize for it being in the form of an image instead of a table, but I couldn't find an easy way to convert an Excel spreadsheet into a decently-formatted HTML table.
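
(A possible route for next time, sketched below with hypothetical file names: pandas can read an Excel sheet and emit a serviceable HTML table in a couple of lines, assuming pandas and openpyxl are installed.)

import pandas as pd

df = pd.read_excel('buffy_ages.xlsx')        # read the spreadsheet into a DataFrame
html = df.to_html(index=False)               # render it as an HTML table
with open('buffy_ages.html', 'w', encoding='utf-8') as f:
    f.write(html)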


Tuesday, January 25, 2022

Image Metadata: The Metadata Removal Problem

 This is part 6 of my series on metadata for scanned pictures.

Part 1: The Scanned Image Metadata Project

Part 2: Standards, Guidelines, and ExifTool

Part 3: Dealing with Timestamps

Part 4: My Approach

Part 5: Viewing What I Wrote

Part 6: The Metadata Removal Problem (this post)


 

When I embarked on this project, I knew it'd be a challenge to figure out how to put metadata into image files. I expected that some programs would be better than others at showing the metadata I'd put in. But I didn't realize I'd have to contend with programs that silently strip metadata when you ask them to do something completely different. Caroline Guntur's blog post opened my eyes:

Many cloud platforms and social media sites will not upload, or retain the [metadata] in your photos. Some will even strip the information completely upon download.

So I can upload an image file with metadata, but the uploaded file might not have it. Or I can download a file with metadata, but the downloaded file might not have it. Ouch!

I shouldn't have been surprised. Especially on social media sites, photo metadata has acquired a reputation as a security and privacy risk. The GPS coordinates for where a photo was taken (typically included in the metadata by cell phones) have drawn particular attention. Some sites have responded by removing most or all metadata from uploaded images (sometimes while keeping it for their own use). That has drawn the ire of many photographers, who have been understandably unhappy about having, among other things, their embedded copyright notices removed from their pictures.

It got me to wondering: if uploading and downloading images may affect their metadata, what about other ways of moving files around? Is email safe? Texting? I decided to do some poking around.

I looked into two basic scenarios:

  • Upload/Download: Is metadata maintained in image files that are uploaded to a web site or cloud service and then downloaded? This scenario covers social media sites like Facebook, Instagram, and Twitter, as well as cloud storage platforms from Google, Apple, Amazon, etc.

  • Point-to-point Communication: Is metadata maintained in images sent via email, texting, or instant messaging (e.g., WhatsApp and Facebook Messenger)? And what about Airdrop, Apple's close-range wireless mechanism for transferring files from one device to another?

Upload/Download Scenarios

IPTC is not just the name of a metadata standard. It's also the abbreviation for the organization that created it: the International Press Telecommunications Council. Among its activities is looking out for the intellectual property rights of its members. One of the ways it does that is by checking how well a variety of web sites adhere to the IPTC's request that metadata in uploaded image files be left intact. Every three years since 2013, the IPTC has tested a variety of sites to see whether they retain four fields the IPTC considers particularly important: Caption/description, Creator, Copyright Notice, and Credit Line ("the 4Cs"). The latest results (from 2019) cover 16 sites and are here. I encourage you to read the report (it's not long), but the highlights are that "good" sites (i.e., those retaining the 4Cs) include Flickr, Google Photos and Drive, Dropbox, and Microsoft OneDrive. The "bad" sites (i.e., those not retaining the 4Cs) include Instagram, Facebook, and Twitter.

The IPTC's test results are interesting, but they're silent regarding the retention of the two timestamps I care about ("when taken" and "when scanned"), and they have nothing to say about  Apple's iCloud, which I think is a serious omission. I decided to do some testing of my own.  

It's useful to distinguish sites whose primary purpose is storage and accessibility from those whose primary purpose is sharing. Google Photos and Apple iCloud Photos, for example, push themselves as services that let you securely store your photos (and videos) in the cloud and have them accessible from all your devices. They support sharing photos with others, but that's not their primary purpose. You could easily make use of these services without ever sharing anything.

In contrast, the primary reason to upload photos to social media services like Facebook, Instagram, and Twitter is to share them with others. The purpose of uploading photographs is for other people to see them.

Sites for Storage and Accessibility

I uploaded an image file to the following services, then I downloaded it and checked to see if the Exif, IPTC, and XMP copies of the four fields I use (description, copyright, "when taken", and "when scanned") remained intact. My findings were consistent, both with one another and with the results of the IPTC's testing:

  • Google Photos: All my metadata was preserved.
  • iCloud Photos: All my metadata was preserved.
  • Google Drive: All my metadata was preserved.
  • iCloud Drive: All my metadata was preserved.
  • Microsoft OneDrive: All my metadata was preserved.
  • CrashPlan for Small Business: All my metadata was preserved.
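
For anyone who wants to repeat the experiment, here's a minimal sketch of a scripted version of that round trip check, assuming the exiftool command-line program is installed and using placeholder file names:

import json
import subprocess

FIELDS = ['-MWG:Description', '-MWG:Copyright', '-MWG:DateTimeOriginal', '-MWG:CreateDate']

def read_fields(path):
    # Ask exiftool for the four fields in JSON form and return them as a dict.
    out = subprocess.run(['exiftool', '-j', *FIELDS, path],
                         capture_output=True, text=True, check=True).stdout
    return json.loads(out)[0]

before = read_fields('original.jpg')      # the file that was uploaded
after = read_fields('downloaded.jpg')     # the copy that came back down
for field in ('Description', 'Copyright', 'DateTimeOriginal', 'CreateDate'):
    verdict = 'preserved' if before.get(field) == after.get(field) else 'changed or removed'
    print(f'{field}: {verdict}')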

This is reassuring: storing an image file in cloud storage is unlikely to change its metadata. That's good news for those of us who believe in cloud-based backups.

My experiments were based on the default behavior for these sites, and I suspect that's the case for the IPTC's tests, too. According to Consumer Reports, Flickr can be configured to omit metadata when images are downloaded, and it's possible that the same is true of other storage and accessibility sites. However, anybody who configures a site to omit metadata in downloaded images is hardly in a position to complain if images downloaded from that site lack metadata.

Sites for Sharing

Social media sites such as Facebook and Twitter are perhaps the best known sharing-oriented web sites, but the umbrella over such sites is broader than that. Also covered are dating sites (e.g., Tinder and eHarmony), as well as sites for selling things (e.g., eBay and craigslist).

I didn't test how these sites handle image metadata, because others (e.g., Consumer Reports and Kaspersky, in addition to the IPTC) have covered this ground better than I could. They've all come to the same conclusion: social media and other sharing-based sites typically remove metadata from uploaded photographs.

Social media and other sharing-based sites are a poor choice if you want to share not just pictures, but also their metadata.

Point-to-Point Communication

The point-to-point communication mechanisms I considered are email, texting and instant-messaging, and Apple's Airdrop. I did little experimentation of my own, because this terrain has also been well explored by others.

On the email front, the consensus is that image files sent via email retain their metadata. I did a few simple tests, and my results showed the same: metadata was preserved.

Email can contain images either inline (i.e., displayed in the message itself) or as attachments. In 2020, Craig Ball published a blog post describing how inline images in email appeared to have no metadata, while attached images did. His investigation revealed that the inline images he received did, in fact, contain all the metadata in the images that had been sent, but the metadata somehow got stripped during the process of saving an inline image as an independent file. The blog post went on to explain how to work around the problem.

To see if I could reproduce his results, I emailed an image to myself twice, once as an attachment and once as an inline image. In both cases, I was able to see the metadata without any trouble. However, the email client I used was Thunderbird, whereas Ball used Gmail and Outlook. That could explain why we experienced different behaviors.

It's comforting that Ball's conclusion aligns with the consensus that images sent via email retain their metadata. At the same time, it's disturbing that extracting an inline image from a message may cause its metadata to be removed. Sigh.

But that's email. These days, more photos are probably sent by text or instant message. How does image metadata fare when communicated in those ways?

On the instant-messaging front, things are clear. I didn't run any tests myself, because the net community speaks with a single voice:

  • WhatsApp removes image metadata.
  • Facebook Messenger removes image metadata.
  • Signal removes image metadata.
  • Telegram removes image metadata.

There are ways to work around this behavior (e.g., by sending photos as documents), but the fact remains that these instant-messaging services redact photo metadata as a matter of policy.

When we shift from instant messaging to good, old-fashioned, ordinary texting, the air is fogged by the fact that smart phones typically obscure whether you're engaging in good, old-fashioned, ordinary texting. Users of the Messages app on Apple devices, for example, typically communicate with one another via iMessage. iMessage is an internet-based protocol that is quite different from the cell phone system's SMS/MMS technologies (which underlie good, old-fashioned texting). iMessage works only between Apple devices and only when an internet connection is available, so for texting to or from non-Apple devices or when internet access is lacking, the Messages app employs SMS/MMS. The protocol used for a particular sent message is indicated in Messages by the bubble color (blue for iMessage, green for SMS/MMS), but all incoming messages look the same (grey bubble), regardless of whether they were transmitted using iMessage or SMS/MMS.

This means that a text message sent or received using Messages might be a "normal" text (conveyed via SMS/MMS), but it might be an iMessage text, depending on whether the other party (or parties) in the conversation were using Apple devices and whether an internet connection was available. My understanding is that a similar bifurcation exists on Android devices, where the Google Messages app may send and receive messages using either RCS or SMS/MMS, depending on the capabilities of the parties' devices and those of their service providers.

The effect of texting on image metadata appears to be:

  • Photos sent using the iMessage protocol retain their metadata. This is both the wisdom of the net as well as my personal experience. Photos texted between Apple devices arrive with their metadata intact (unless the lack of an internet connection causes Messages to fall back on SMS/MMS).
  • Photos sent using the RCS protocol retain their metadata. It's harder to find information about RCS than iMessage, but the sources I consulted (e.g., here and here) agree on this point. Photos texted between devices running Android should arrive with their metadata intact (provided both sender and recipient(s) are using RCS).
  • Photos sent using SMS/MMS may retain their metadata. This is the scenario that applies to texts between different kinds of devices (e.g., between iOS and Android devices). Most (but not all) Internet sources I consulted said that MMS strips metadata. My favorite overview of the situation is by Dr. Neal Krawetz. His summary is that "the entire delivery process for texted pictures is just one bad handling process after another." I lack the expertise to evaluate the accuracy of his analysis, but it looks quite plausible, and it would explain the varying behavioral descriptions I found elsewhere on the internet. I feel confident in stating that transmitting photos via SMS/MMS might retain their metadata.

Stepping back from the details, we can say that instant-messaging apps scrub metadata from photos, while photos sent by text may or may not have it scrubbed. Texting photos between Apple devices is a good bet as regards metadata retention, but it's important to make sure that both sender and receiver see blue bubbles in the Messages app.

The final point-to-point communication mechanism I looked at is Apple's Airdrop. I'd always thought of Airdrop as simply a way to wirelessly copy a file from one Apple device to another, but that's not quite right. A standard file copy entails copying a sequence of bytes from one place to another. What the bytes represent (e.g., a document, an image, the state of a game) is immaterial. The copying program doesn't care what the bytes are for. It just copies them.

Copying an image file in that manner would copy the file's metadata, because the copying program wouldn't care that it's an image file. It would simply copy the bytes, just like it would with a document or a game state, etc. But that's not how Airdrop behaves. By default, metadata is removed from pictures that are Airdropped. This can be overridden by enabling the "All Photos Data" option, but it's a non-sticky setting, so it has to be explicitly enabled each time Airdrop is used to copy images from one device to another. 

Airdrop's "strip metadata by default" behavior makes it less convenient and less reliable for sharing photos with metadata than a simple file-copying program would be.

Conclusion

Once you get metadata into an image file, you don't want to accidentally lose it, either for yourself or for those with whom you want to share it. The safest things you can do with image files (from the perspective of metadata retention) are to upload them to sites designed for storage and accessibility (as opposed to sharing) and to send them via email. The worst things you can do (again, from the perspective of metadata retention) are to upload them to sharing-oriented sites (e.g., social networks) or to text them using instant-messaging services.

Tuesday, January 18, 2022

Image Metadata: Viewing What I Wrote

This is part 5 of my series on metadata for scanned pictures.

Part 1: The Scanned Image Metadata Project

Part 2: Standards, Guidelines, and ExifTool

Part 3: Dealing with Timestamps

Part 4: My Approach

Part 5: Viewing What I Wrote (this post)

Part 6: The Metadata Removal Problem


 

Just because an image file contains metadata doesn't mean that the metadata is visible or recognizable as what it is. Lots of programs can display metadata. Each has its own quirks. I put only four pieces of metadata into my image files, but most of the programs I tested show only some of these. The fields that are displayed may be labeled differently from both the standard names and the names used by the program that wrote the metadata into the file. Some programs apply a name from one standard to a field from a different one.

It is, as usual, a mess. The closer you look, the messier it gets. I've performed numerous experiments, and the stories I could tell...  

But I won't. The way to deal with the mess is to not look very closely. My goal is to produce image files with metadata that I can share with others. I already know how to view an image's metadata, so the real question is whether other people can see it. 

There's no reason to expect friends and family members, etc., to know anything about Exif, IPTC or XMP. However, they'll know descriptive text or a copyright statement when they see it, and if they see a date and time, they'll assume that's when the picture was taken. If they see another date and time that says something about when the picture was scanned or digitized, they are unlikely to be confused.

Inspired by Carl Seibert's survey of how different programs prioritize Exif, IPTC, and XMP when reading metadata, I examined a dozen programs to see how well they made the metadata visible for my sample slide from part 3 (shown at right). Although a couple of the programs are aimed at more serious users, most of the 12 are stock apps that come as part of the operating system. They're the programs likely to be used by people with no special interest in metadata. All of the programs I looked at are free.

The high-level takeaway is that the most important metadata stored in my scanned image files is pretty accessible for anybody who knows to look for it. Things could be better, but they're not bad. As such, my approach to embedding metadata in image files seems to be reasonable.

I scored each program I looked at on a 10-point scale. Points were awarded as follows:

  • 6 points if the image's metadata description is fully visible. If this requires making a window wider or putting a phone into landscape mode, that's fine. I used this description (from part 4 of this series) for testing:

Tim Johnson's equipment | Taken 7/1992 | Developed 8/1992 | Scanned 35mm slide

  • 3 points if the metadata description is partially visible, but can't be made fully visible. A partially visible description tells the person looking at the picture that descriptive information is present, but it's not as good as showing the entire description.

  • 2 points for showing the date when the picture was taken such that a viewer could reasonably assume that that's what the timestamp represents.

  • 1 point for displaying the copyright notice (even if it's only partially visible).

  • 1 point for showing the date and time scanned in a way that makes it recognizable as what it is.

I weight the description field heavily, because it contains the two most important pieces of metadata: what's in the picture and when it was taken. (Recall from part 3 that the "when taken" field holds only an approximation. The actual "when taken" information is part of the description.) If the description is visible, and especially if it's fully visible, that's all most people need.

I issue a big penalty for programs that engage in what I consider a grossly deceptive practice:

  • -6 points if the image's description metadata is not visible, but the program offers its own description field that, if used, stores the entered information, but not in the image file. In other words, a program loses 6 points if it offers a field that looks like an image's metadata field for a description, but isn't. 

Only one program incurred this penalty. I don't want to give anything away, so I'll just say that it carries a company name that rhymes with "Boogle".
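
Restated as a small (purely illustrative) scoring function, the rubric looks like this:

def score(description_full, description_partial, when_taken, copyright_notice,
          when_scanned, fake_description_field=False):
    points = 6 if description_full else (3 if description_partial else 0)
    points += 2 if when_taken else 0
    points += 1 if copyright_notice else 0
    points += 1 if when_scanned else 0
    if fake_description_field and not (description_full or description_partial):
        points -= 6                    # the "grossly deceptive practice" penalty
    return points

# Example: partial description, "when taken", and copyright, nothing else
print(score(False, True, True, True, False))    # 6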

The scores tell only part of the story. 10 means that a program can display all the metadata I store in a recognizable form, but it doesn't mean that getting it to do that is straightforward. For details, read the per-program overviews that follow.

Programs on Windows 10

Of the following six programs, three (Windows File Explorer, Windows Photo Viewer, and the Microsoft Photos App) are included with Windows. The other three (XnView MP, Adobe Bridge, and ExifTool) must be downloaded and installed separately.

Windows File Explorer and Windows Photo Viewer (Score: 6)

These two programs show image metadata the same way: on the Details tab of a file's Properties dialog. This dialog displays a limited-width view of the description (3 points) and copyright (1 point), as well as the "when taken" timestamp (2 points). There's no timestamp for when the image was scanned. The fact that the description is displayed twice and is labeled both Title and Subject is strange, but both fields are in the Description section of the tab, so I think things are clear enough. 

Both of these programs ship with Windows 10, but my understanding is that Photo Viewer is hidden in some installations in favor of the Photos app. From a metadata point of view, that's a big step backwards, as we'll see next.

Photos App (Score: 2) 

Clicking on "..." and selecting "ⓘ File Information" when viewing a photo in the Photos app brings up a panel with metadata information. Of the four fields I write into image files, only when the photo was taken is displayed (2 points). This is disappointing for a dedicated photos app, and it's notably worse than Windows Photo Viewer, which is the program the Photos app replaced.

XnView MP (Score: 10)

XnView MP is my default image viewer, and that was the case before I started worrying about metadata. Its score of 10 indicates that it shows all the information I put into image files, but the plethora of metadata viewing options takes some getting used to. 

Everything starts with the Edit menu, which includes entries for "Edit comment...", "Edit IPTC...", and "Edit XMP...". For purposes of viewing metadata, none of these is correct. What you want is "Properties..." (also on the Edit menu). Selecting it brings up a window with multiple tabs, including one for each of Exif, IPTC, XMP, and ExifTool.

The Exif tab does the best job of showing all the metadata I embed, with each of the four fields clearly labeled and near the top of the window. On its own, this tab scores a 10.

The IPTC-IIM tab also shows all the fields, but the timestamp for when the image was scanned is unrecognizable unless you know that the hexadecimal codes for the relevant timestamp fields are 0x3e and 0x3f. No "normal" person would know that, so the IPTC tab loses the point for showing the date/time scanned and ends up with a 9. 

The XMP tab shows everything, but I'd expect the similarity of the names for the "when taken" and "when scanned" fields (DateCreated and CreateDate) to sow confusion and uncertainty. I give the tab credit for neither, and it gets a 7.

The ExifTool tab shows the results of running the copy of ExifTool that's embedded inside XnView MP. The amount of information can be overwhelming, but everything's there. It's there three times, in fact, once each for Exif, IPTC, and XMP. Taken by itself, the ExifTool tab scores a 10, but the Exif tab remains the easier way to get the information.

Adobe Bridge (Score: 10)

Bridge is Adobe's free companion to Photoshop and Lightroom. It's designed to organize and manage photos, not to change their appearance. Using Bridge, you can view and edit metadata, but you can't change what a picture looks like. 

It's reasonable to expect people who use Bridge to have an above-average familiarity with image metadata.

Bridge's metadata panel is divided into several sections, including ones for Exif, IPTC IIM, IPTC Core, and IPTC Extension. XMP appears to be missing until you recall (from part 2) that IPTC Core and IPTC Extension are sometimes used synonymously with XMP. No single section shows all the fields I write, but everything is present: the IPTC-IIM and IPTC Core sections have the description, "when taken" timestamp, and copyright notice, and the Exif section has the "when scanned" timestamp.

ExifTool (Score: 10)

ExifTool is a command line program, though GUIs have been built on top of it. It's the go-to power tool in the image metadata world, and it didn't take me long to regard it as the source of truth for metadata in image files. Different programs label the metadata they show in different ways, so when you look at a field value, it can be hard to know exactly what you're looking at. Some programs lie. The Preview App on MacOS, for example, has tabs for Exif and IPTC, but there are conditions under which the values on those tabs come from XMP! Since metadata in image files can be seen only with the aid of programs that know how to read it, how do you know which programs to trust? I trust ExifTool.
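ExifTool can also show you exactly where a value comes from. As a rough sketch (the file name here is just a placeholder), a command along these lines asks for each embedded copy of the description, prefixed with the metadata block it was read from:

exiftool -G -ImageDescription -Caption-Abstract -Description '.\SomeScan.jpg'

The -G flag prefixes each line with its group ([EXIF], [IPTC], or [XMP]), and the three tag names are where the description lives in each of those blocks, so there's no guessing about which copy you're looking at.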

It's hard to imagine anybody using ExifTool without knowing about Exif, IPTC, XMP, and the various fields they offer. I therefore score ExifTool with the expectation that it's being used by somebody who brings a fair amount of metadata knowledge to the table. Such users can be expected to recognize the difference between DateCreated and CreateDate. With that in mind, ExifTool scores a 10.

ExifTool's output on the sample slide is an unwieldy 96 lines long if you let it show you everything (which is the default), but if you ask it for only the fields I put into it,

exiftool -S `
         -mwg:description `
         -mwg:copyright `
         -mwg:datetimeoriginal `
         -mwg:createdate `
         '.\The Brown Experience 1985-1993 031.jpg'

you get this in return:

Description: Tim Johnson's equipment | Taken 7/1992 | Developed 8/1992 | Scanned 35mm slide
Copyright: © 2022 Scott Meyers (smeyers@aristeia.com), all rights reserved.
DateTimeOriginal: 1992:07:01 00:00:00
CreateDate: 2022:01:14 17:54:46

The copyright symbol (©) is displayed incorrectly, but that's a problem with Windows PowerShell (where I ran the command), not ExifTool.
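If the mangled copyright symbol bothers you, the usual workaround (a sketch; I haven't tried it on every PowerShell version) is to tell the console to treat ExifTool's UTF-8 output as UTF-8 before running the command:

[Console]::OutputEncoding = [System.Text.Encoding]::UTF8

ExifTool emits UTF-8 by default, and Windows PowerShell's console doesn't assume that, which is why © comes out garbled.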

Programs on MacOS Big Sur

Each of the three programs I tested on MacOS is included with the operating system.

Finder (Score: 6)

Right-clicking on an image file in the Finder and choosing "Get Info" brings up this window:

It shows the full description from the metadata (6 points), and although timestamps are shown for when the file was created and last modified, there is no sign of the "when taken" and "when scanned" timestamps. The copyright notice is similarly missing. The Finder thus gets a score of 6.

Photos App (Score: 8)

Clicking the ⓘ while viewing a photo in the Photos app brings up its Info window:

It shows the full description (6 points) as well as when the photo was taken (2 points), but the "when scanned" timestamp and the copyright notice are not shown. The score for the Photos app is 8.

Preview App (Score: 10)

Viewing image metadata with the MacOS Preview app reminds me of using XnView MP, but with a twist. With XnView MP, the Exif tab shows metadata from the Exif fields, and the IPTC tab shows metadata from the IPTC fields. That's not always the case with the MacOS Preview app. Regardless of how a tab is labeled, it may show metadata drawn from Exif, IPTC and XMP. That's disturbing, but, fortunately, irrelevant for my purposes. Writing the same metadata to corresponding fields in Exif, IPTC, and XMP means that it doesn't matter which field gets read. The Preview app's Exif tab, for example, shows when the photo was taken and when it was digitized (i.e., scanned). This information is correct for my image files, although it's actually pulled from the IPTC metadata instead of that for Exif.

On its own, this tab gets a score of 3: 2 for the date/time when the picture was taken, and 1 for when it was scanned.

The IPTC tab shows everything and thus gets a 10, though I take a dim view of the decision to display the date and time digitized between the date taken and the time taken:

The Preview app also has a TIFF tab. I don't know what kind of metadata this tab is supposed to show, but since all the tabs can show metadata from Exif, IPTC, and XMP, the labels don't really matter. Here's the TIFF tab for the sample slide. It shows the full description (6 points) and the copyright notice (1 point). The value it shows for the "Date Time" field corresponds neither to when the photo was taken nor to when it was scanned, so no points for that. The tab gets a score of 7.

The more I use the Preview app to look at image metadata, the less I like it. It right-justifies field names and left-justifies field values with respect to the center of the window, and, as you can see, that leads to a lot of wasted space on the left side of the window. I've often found that widening the window doesn't reformat the text inside, so I've had to play games (e.g., force-close the app and then reopen it) to get all the metadata properly displayed.

Programs on iOS 15

Photos App (Score: 8)

As of iOS 15, touching the ⓘ icon or swiping up while viewing an image displays the Info pane, which includes the image's full description (6 points) and the date and time it was taken (2 points). There's no sign of the copyright or "date scanned" metadata, so this app gets an 8.

Prior to iOS 15, accessing an image's metadata typically involved saving the image to the Files app, then using the Files app to view the embedded metadata. That continues to work on iOS 15, but it's more cumbersome, and my experience is that even though it displays more metadata fields than the Photos app's Info pane, it doesn't show any of the fields I write to my scanned image files. It would get a score of 0 if I officially evaluated it, but since I'm running iOS 15, I'm going to pretend I know nothing about the Files app workaround.

Google Photos App (Score: -4)

I'm generally impressed with Google's products and services, but the impression its iOS Photos app leaves on me is a depressing mixture of disbelief and anger. 

Pressing "..." while viewing a photo brings up its Info sheet:

It shows the "when taken" timestamp (2 points), but there's no sign of the "when scanned" timestamp, the copyright notice, or the description. Instead, there is an "Add description..." field, which, being empty, suggests that the image lacks a description. For my files, this is not just untrue, but triply untrue, because my scanned image files have description metadata in each of the Exif, IPTC, and XMP fields. As a company, Google knows this, because Google Photos in the cloud (see below) displays the embedded description. 

But that's not the heinous part. Should you, noting the empty description field, succumb to temptation and put information into it, your text will not be stored in the metadata in the image file! Instead, the information you enter will be stored separately by Google. The same is true of any other edits you make on the Info sheet, e.g., "Add a location" or "Edit date & time". The Info sheet is a place to enter image metadata, but it's not a place to enter image metadata that will be stored inside the image!

This is reprehensible behavior. Hiding metadata present in an image while offering users the chance to add metadata that you'll keep private is...well, words fail me. But math doesn't. I slap on the -6 penalty for grossly deceptive practices, and Google's Photos app for iOS ends up with a record-setting low score of -4.

Cloud Services

There are lots of cloud-based photo storage services. I tested only Google Photos and iCloud Photos, and to be clear, I did it via their web browser interfaces, not via an app on a computer or mobile device. Among the many services I did not test are Facebook, Flickr, SmugMug, Amazon Photos, Microsoft OneDrive, Degoo, and Photobucket. I welcome your comments about viewing image metadata using these services.

In a 2017 blog post, Caroline Guntur wrote,

Many cloud platforms and social media sites will not upload, or retain the [metadata] in your photos. Some will even strip the information completely upon download.

In a later post in this series, I will address what happens to metadata when you move image files around (e.g., upload or download them, email them, text or IM them, etc.). My testing shows that uploading an image to both Google Photos and iCloud Photos has no effect on its metadata--at least not for the four fields I care about. 
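If you want to repeat that kind of check yourself, it's easy enough: download a copy of the uploaded photo from the service and point ExifTool at both files. Something along these lines (the file names are placeholders) shows whether the four fields survived the round trip:

exiftool -S -mwg:description -mwg:copyright -mwg:datetimeoriginal -mwg:createdate '.\original.jpg' '.\downloaded.jpg'

ExifTool prints a section for each file, so differences are easy to spot.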

Google Photos (Score: 8)

Clicking the ⓘ symbol while viewing a photo opens its Info panel. That panel displays the full metadata description (6 points) as well as the "when taken" timestamp (2 points). The copyright and "when scanned" fields are missing, so the Google Photos cloud service scores an 8.

Like the Google Photos iPhone app, the Google Photos cloud service displays an inviting "Add a description" field at the top of the panel. As with the iPhone app, metadata you enter here is not stored in the image file, but instead in a Google database. 

Unlike the iPhone app, the description metadata already in the file is shown, albeit with the label "Other." Because Google Photos in the cloud displays the description metadata embedded in the file, there's less chance the person viewing the photo will think there's no description for it and will avail themselves of the "Add a description" field. I therefore withhold the six-point penalty here that I impose on Google's iPhone app.

iCloud Photos (Score: 2)

As far as I can tell, the only metadata visible for a photo viewed using the web browser interface to iCloud Photos is the date on which it was taken. It's displayed above the photo being viewed:

That yields a disappointing score of 2. Apple's apps on MacOS and iOS do notably better, and my impression from looking at Apple's support pages is that they expect you to use those apps as much as possible. If you don't have an Apple device, well, presumably that's an incentive for you to get one.