<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>JPEG chroma subsampling in Camera Software</title>
    <link>https://community.usa.canon.com/t5/Camera-Software/JPEG-chroma-subsampling/m-p/572744#M24182</link>
    <description>When saving a JPEG from GIMP, I use 4:4:4 chroma subsampling instead of 4:2:2. A recent open-access paper (Ashraf, Chapiro and Mantiuk, Nat Commun 16, 9086 (2025)) casts doubt on the common practice of chroma subsampling, contradicting the traditional wisdom behind 4:2:2.</description>
    <pubDate>Mon, 27 Oct 2025 17:46:44 GMT</pubDate>
    <dc:creator>johnrmoyer</dc:creator>
    <dc:date>2025-10-27T17:46:44Z</dc:date>
    <item>
      <title>JPEG chroma subsampling</title>
      <link>https://community.usa.canon.com/t5/Camera-Software/JPEG-chroma-subsampling/m-p/572744#M24182</link>
      <description>&lt;P&gt;When saving a JPEG from Gimp, I use chroma subsampling of 4:4:4 instead of 4:2:2. I read a paper in Nature that reinforces my guess. Canon DPP software and Canon cameras always use 4:2:2 when creating a JPG file, but in DPP one may save a 16 bit TIF file and use Gimp to save it with 4:4:4 and also a newer JPEG compression algorithm than the camera or DPP use so that the file will be smaller. The new Nature paper contradicts the traditional wisdom.&lt;/P&gt;
&lt;P&gt;A traditional definition is at&amp;nbsp;&lt;A href="https://en.wikipedia.org/wiki/Chroma_subsampling" target="_blank"&gt;https://en.wikipedia.org/wiki/Chroma_subsampling&lt;/A&gt;: "&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Chroma subsampling&lt;/STRONG&gt;&lt;SPAN&gt;&amp;nbsp;is the practice of encoding images by implementing less resolution for&amp;nbsp;&lt;/SPAN&gt;&lt;A title="Chrominance" href="https://en.wikipedia.org/wiki/Chrominance" target="_blank"&gt;chroma&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A title="Information" href="https://en.wikipedia.org/wiki/Information" target="_blank"&gt;information&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;than for&amp;nbsp;&lt;/SPAN&gt;&lt;A title="Luma (video)" href="https://en.wikipedia.org/wiki/Luma_(video)" target="_blank"&gt;luma&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;information, taking advantage of the human visual system's lower acuity for color differences than for luminance.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;...&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;The&amp;nbsp;&lt;/SPAN&gt;&lt;A title="Visual perception" href="https://en.wikipedia.org/wiki/Visual_perception" target="_blank"&gt;human vision system&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;processes color information (&lt;/SPAN&gt;&lt;A title="Hue" href="https://en.wikipedia.org/wiki/Hue" target="_blank"&gt;hue&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;and&amp;nbsp;&lt;/SPAN&gt;&lt;A title="Colorfulness" href="https://en.wikipedia.org/wiki/Colorfulness" target="_blank"&gt;colorfulness&lt;/A&gt;&lt;SPAN&gt;) at about a third of the resolution of&amp;nbsp;&lt;/SPAN&gt;&lt;A title="Relative luminance" href="https://en.wikipedia.org/wiki/Relative_luminance" target="_blank"&gt;luminance&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;(lightness/darkness information in an image). Therefore it is possible to&amp;nbsp;&lt;/SPAN&gt;&lt;A title="Sampling (signal processing)" href="https://en.wikipedia.org/wiki/Sampling_(signal_processing)" target="_blank"&gt;sample&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;color information at a lower resolution while maintaining good image quality.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;"&lt;/P&gt;
&lt;P&gt;The Wikipedia quote above seems to suggest that 4:2:2 would be more than adequate.&lt;/P&gt;
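To see concretely what is being discarded, here is a stdlib-only sketch (my own illustration, not code from the paper or from any real JPEG codec) of 4:2:2-style chroma averaging applied to a hard red-green edge, the case the paper singles out. It uses the full-range BT.601 conversion that JPEG/JFIF commonly uses.

```python
# Sketch: 4:2:2-style chroma averaging on a red/green stripe pattern.
def rgb_to_ycbcr(r, g, b):
    # Full-range BT.601 (JFIF) forward transform.
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    # Matching inverse transform, clamped to 8-bit range.
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    return tuple(min(255, max(0, round(v))) for v in (r, g, b))

# One scanline alternating pure red and pure green pixels.
row = [(255, 0, 0), (0, 255, 0)] * 8
ycc = [rgb_to_ycbcr(*px) for px in row]

# 4:2:2: keep Y for every pixel, replace each horizontal pair's
# Cb/Cr with the pair average (the "twofold reduction" in the quote).
sub = []
for i in range(0, len(ycc), 2):
    (y0, cb0, cr0), (y1, cb1, cr1) = ycc[i], ycc[i + 1]
    cb, cr = (cb0 + cb1) / 2, (cr0 + cr1) / 2
    sub += [(y0, cb, cr), (y1, cb, cr)]

decoded = [ycbcr_to_rgb(*px) for px in sub]
max_err = max(abs(a - b) for px, qx in zip(row, decoded)
              for a, b in zip(px, qx))
print(max_err)
```

The per-channel error is huge for this pattern: the luma channel barely distinguishes red from green, so once the chroma pair is averaged, the red-green alternation is largely gone. That is the worst case the paper's 89-ppd red-green result speaks to; ordinary photographs have far gentler chroma edges.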
&lt;P&gt;However, a recent open-access paper in Nature Communications says: "&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;It could also be surprising that the foveal resolution limit of red-green patterns is similar to that of achromatic patterns—89 ppd for red-green vs. 94 ppd for achromatic. It must be noted, however, that we did not try to isolate observers’ individual chromatic mechanisms via the heterochromatic flicker paradigm&lt;/SPAN&gt;&lt;SUP&gt;&lt;A title="Wagner, G. &amp;amp; Boynton, R. M. Comparison of four methods of heterochromatic photometry. J. Opt. Soc. Am. 62, 1508–1515 (1972)." href="https://www.nature.com/articles/s41467-025-64679-2#ref-CR24" target="_blank"&gt;24&lt;/A&gt;&lt;/SUP&gt;&lt;SPAN&gt;, as we wanted to capture data that could generalise across the population. Our results cast doubt on the common practice of chroma sub-sampling found in almost every lossy image and video format, from JPEG image coding to H.265 or AV1 video encoding. The assumption of chroma subsampling is that the resolution of chromatic channels can be reduced twofold in relation to the achromatic channel due to the lower sensitivity of the visual system to high-frequency chromatic contrast. Our data suggests that this only holds for the yellow-violet colour direction, with the maximum resolution of 53 ppd, but not for the red-green direction, consistent with the vision science theory that the isoluminant red-green pathway is the most sensitive opponent-colour channel of the human visual system&lt;/SPAN&gt;&lt;SUP&gt;&lt;A title="Chaparro, A., Stromeyer III, C. F., Huang, E. P., Kronauer, R. E. &amp;amp; Eskew, R. T. Colour is what the eye sees best. Nature 361, 348–350 (1993)." href="https://www.nature.com/articles/s41467-025-64679-2#ref-CR25" target="_blank"&gt;25&lt;/A&gt;&lt;/SUP&gt;&lt;SPAN&gt;.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;"&lt;/P&gt;
&lt;P&gt;Also, on displaying images on a screen, the paper says: "&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;We may also want to know how the resolution limit, expressed in ppd units, translates to actual displays and viewing distances. This is shown in Fig.&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://www.nature.com/articles/s41467-025-64679-2#Fig2" target="_blank"&gt;2c&lt;/A&gt;&lt;SPAN&gt;, where we plot the relationship between the display resolution (number of horizontal lines) and the viewing distance (measured in display heights). Our model predictions can be compared with the ITU-R BT.2100-2&lt;/SPAN&gt;&lt;SUP&gt;&lt;A title="ITU-R BT.2100-2. Image Parameter Values for High Dynamic Range Television for Use in Production and International Programme Exchange. (International Telecommunication Union, Geneva, 2018)." href="https://www.nature.com/articles/s41467-025-64679-2#ref-CR31" target="_blank"&gt;31&lt;/A&gt;&lt;/SUP&gt;&lt;SPAN&gt;&amp;nbsp;recommended viewing distances for television, shown as red horizontal lines in Fig.&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://www.nature.com/articles/s41467-025-64679-2#Fig2" target="_blank"&gt;2c&lt;/A&gt;&lt;SPAN&gt;. Since Full HD (FHD) resolution was not designed to deliver a perfect image, the ITU recommendation of 3.2 display heights falls short of the reproduction below the visibility threshold. Our model indicates that a distance of at least 6 display heights would be necessary to satisfy the acuity limits of 95% of the observers. For 4K and 8K displays, the ITU suggests viewing distances of 1.6–3.2 and 0.8–3.2 display heights, respectively. Our model shows that those ranges are overly conservative and there is little benefit of 8K resolution when sited further than 1.3 display heights from the screen. Used in this way, our model provides a framework to update existing guidelines and to establish new recommendations based on the limitations of our vision. In Fig.&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://www.nature.com/articles/s41467-025-64679-2#Fig2" target="_blank"&gt;2d&lt;/A&gt;&lt;SPAN&gt;, we plot the relation between pixel density (in pixels-per-inch) and viewing distance and show the screen resolution for two different devices. To allow the readers to test their own displays, we created an online display resolution calculator available&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://www.cl.cam.ac.uk/research/rainbow/projects/display_calc/" target="_blank"&gt;here&lt;/A&gt;&lt;SPAN&gt;.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;"&lt;/P&gt;
&lt;P&gt;&lt;A href="https://www.nature.com/articles/s41467-025-64679-2" target="_blank"&gt;https://www.nature.com/articles/s41467-025-64679-2&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Ashraf, M., Chapiro, A. &amp;amp; Mantiuk, R.K. Resolution limit of the eye — how many pixels can we see?&amp;nbsp;&lt;/SPAN&gt;&lt;I&gt;Nat Commun&lt;/I&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG style="font-weight: bolder; box-sizing: inherit; color: #222222; font-family: -apple-system, 'system-ui', 'Segoe UI', Roboto, Oxygen-Sans, Ubuntu, Cantarell, 'Helvetica Neue', sans-serif; font-size: 16px; font-style: normal; font-variant-ligatures: normal; font-variant-caps: normal; letter-spacing: normal; orphans: 2; text-align: start; text-indent: 0px; text-transform: none; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; white-space: normal; background-color: #ffffff; text-decoration-thickness: initial; text-decoration-style: initial; text-decoration-color: initial;"&gt;16&lt;/STRONG&gt;&lt;SPAN&gt;, 9086 (2025). &lt;A href="https://doi.org/10.1038/s41467-025-64679-2" target="_blank"&gt;https://doi.org/10.1038/s41467-025-64679-2&lt;/A&gt;&lt;/SPAN&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 27 Oct 2025 17:46:44 GMT</pubDate>
      <guid>https://community.usa.canon.com/t5/Camera-Software/JPEG-chroma-subsampling/m-p/572744#M24182</guid>
      <dc:creator>johnrmoyer</dc:creator>
      <dc:date>2025-10-27T17:46:44Z</dc:date>
    </item>
  </channel>
</rss>