{"id":225,"date":"2020-10-06T22:35:10","date_gmt":"2020-10-06T22:35:10","guid":{"rendered":"http:\/\/cas.polito.it\/NXP-LANDTIGER@PoliTo-University\/?p=225"},"modified":"2020-10-09T14:32:20","modified_gmt":"2020-10-09T14:32:20","slug":"display-simple-landtiger-nxp-lpc1768-video-player","status":"publish","type":"post","link":"https:\/\/cas.polito.it\/NXP-LANDTIGER@PoliTo-University\/?p=225","title":{"rendered":"DISPLAY \u2013 Simple LandTiger NXP LPC1768 video player"},"content":{"rendered":"\n<p>The aim of this article is to briefly explain how to play a short video (in terms of sequence of frames) using the LandtTiger LPC1768 board and the connected <a rel=\"noreferrer noopener\" href=\"https:\/\/cdn-shop.adafruit.com\/datasheets\/ILI9325.pdf\" target=\"_blank\">ILI9325 LCD<\/a> screen (ILI9320 on some board).<\/p>\n\n\n\n<p>First of all we need to find out a way to convert an input video in a compatible format. In order to do that, the most direct way &#8211; and the one we will take &#8211; is to extract a sequence of frames from it and then print those extracted frames to the screen, with specific delay constraints.<\/p>\n\n\n\n<h3>Extracting frames<\/h3>\n\n\n\n<p>For this purpose the choice fell on <em>ffmpeg<\/em> &#8211; an extremely powerful tool that converts and edits audio and video streams.<\/p>\n\n\n\n<p><a href=\"https:\/\/ffmpeg.org\/download.html\">After installing it<\/a>, let&#8217;s extract a resized portion of the input video<\/p>\n\n\n\n<p style=\"color:#4d4d4d;text-align:left\" class=\"has-text-color has-small-font-size code\">ffmpeg -i INPUT_VIDEO -vf scale=&#8221;-2:HEIGHT,crop=WIDTH:HEIGHT&#8221; -ss START_TIME -t DURATION &#8220;VIDEO_NAME_cut.mp4&#8221;<\/p>\n\n\n\n<p>where the non-obvious parameter are <em>WIDTH<\/em> and <em>HEIGHT<\/em> since:<\/p>\n\n\n\n<ul><li>in the first part you can omit one of the two (by replacing with a <em>-1<\/em> or <em>-2<\/em> as appropriate)<\/li><li>in the <em>crop<\/em> part they must be consistent 
with the scaled-down size specified above<\/li><\/ul>\n\n\n\n<p>Now let&#8217;s actually extract the frames:<\/p>\n\n\n\n<p style=\"color:#4d4d4d;text-align:left\" class=\"has-text-color has-small-font-size code\">ffmpeg -i &#8220;VIDEO_NAME_cut.mp4&#8221; -vf fps=1\/FRAME_OFFSET -vcodec rawvideo -f rawvideo -pix_fmt rgb565be -f image2 .\/$filename%03d.raw<\/p>\n\n\n\n<p>Some of the arguments of the previous command deserve a detailed explanation:<\/p>\n\n\n\n<ul><li><em>fps=1\/<strong>FRAME_OFFSET<\/strong> -&gt;<\/em> <em>&#8220;take a frame from the source every <small>FRAME_OFFSET<\/small> seconds&#8221;<\/em><\/li><li><em><strong>rawvideo<\/strong> -&gt; no header is inserted, only the pixel representation<\/em><\/li><li><em>-pix_fmt rgb<strong>565<\/strong>be -&gt;<\/em> the flow of data from the board to the LCD is &#8211; <a href=\"#lpc_note\">in theory<\/a> &#8211; 16-bit based. What we are asking here is <em>&#8220;represent every pixel in the 16-bit RGB convention &#8211; Big Endian&#8221;<\/em> <small>[<a href=\"https:\/\/en.wikipedia.org\/wiki\/High_color#16-bit_high_color\">more here<\/a>]<\/small><\/li><li>output frames will be saved with the <em>.raw<\/em> extension and with increasing numbers in the filename<\/li><\/ul>\n\n\n\n<h3>Image conversion<\/h3>\n\n\n\n<p>Now that our folder contains a bunch of <em>.raw<\/em> frames, the next step is to convert them into a <em><strong>C-like<\/strong><\/em> structure that can be imported into our project.<\/p>\n\n\n\n<p>As shown in a <a href=\"http:\/\/cas.polito.it\/NXP-LANDTIGER@PoliTo-University\/?p=81#images\">previous Special Project<\/a> (<em>thanks to Gianni Cito<\/em>), the free and open source image editor GIMP comes to our rescue, since it offers <em>&#8220;Source Code&#8221;<\/em> as an output format for our input image.
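<\/p>\n\n\n\n<p>As a side note, the <em>rgb565be<\/em> pixels produced above are easy to reason about in C. The following sketch &#8211; with illustrative function names, not part of any library &#8211; shows how each pixel ends up packed and stored:<\/p>

```c
#include <stdint.h>

/* Pack one 8-bit-per-channel RGB pixel into RGB565: 5 bits of red,
 * 6 bits of green and 5 bits of blue in a single 16-bit word, the
 * same layout ffmpeg's -pix_fmt rgb565be emits for every pixel. */
static uint16_t rgb888_to_rgb565(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((r & 0xF8u) << 8) | ((g & 0xFCu) << 3) | (b >> 3));
}

/* The "be" suffix means big-endian byte order: the high byte of the
 * 16-bit value comes first in the .raw file (and in the C array). */
static void rgb565_to_be_bytes(uint16_t px, uint8_t out[2])
{
    out[0] = (uint8_t)(px >> 8);
    out[1] = (uint8_t)(px & 0xFFu);
}
```

<p>For instance, pure red (255, 0, 0) packs to <code>0xF800<\/code> and appears in the frame file as the byte pair <code>F8 00<\/code>.<\/p>\n\n\n\n<p>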
Unfortunately &#8211; unless we master GIMP&#8217;s <em>Script-Fu<\/em> scripting language &#8211; this is a time-consuming operation that does not allow us to convert multiple images at once.<\/p>\n\n\n\n<p>Instead, we already have a partial conversion, <em>i.e.<\/em> a <strong>flow of bytes<\/strong> that can be converted into a C array with the following commands:<\/p>\n\n\n\n<p style=\"color:#4d4d4d;text-align:left\" class=\"has-text-color has-small-font-size code\">find . -name &#8220;*.raw&#8221; | xargs -r -I{} cat &#8220;{}&#8221; &gt; video_merged<br>xxd -i video_merged &gt; array.h<\/p>\n\n\n\n<p>The former merges all the frames into one big file &#8211; <em>which somehow represents our video<\/em> &#8211; while the latter <strong><a href=\"http:\/\/manpages.ubuntu.com\/manpages\/xenial\/en\/man1\/xxd.1.html#:~:text=xxd%20creates%20a%20hex%20dump,of%20decoding%20to%20standard%20output.\">makes a hex dump<\/a><\/strong> of the input file, finally generating the <em>.h<\/em> array!<\/p>\n\n\n\n<h3>Ok&#8230; 
and now?<\/h3>\n\n\n\n<p>After this long conversion phase, what we have is an <em>unsigned char[]<\/em> array of 16-bit pixel values, and we want to make it a <strong>const<\/strong> one.<\/p>\n\n\n\n<p>Why <strong>const<\/strong>?<\/p>\n\n\n\n<p>Because the LandTiger LPC1768 has 512kB of read-only flash memory &#8211; a relatively large amount of memory &#8211; where <strong>const<\/strong> data can live, instead of filling the much smaller RAM.<\/p>\n\n\n\n<h3>How does it work?<\/h3>\n\n\n\n<ol><li>Import the converted <em>.h<\/em> array<\/li><li>Create a <em>video<\/em> object from it<\/li><li>Set the appropriate RIT interval value<\/li><li>Enable the RIT<\/li><\/ol>\n\n\n\n<p>You can find the code, the implementation details and an explanation of the import operations on the <a href=\"http:\/\/cas.polito.it\/gitlab\/nxp-landtiger\/display-advanced\/video-display\/tree\/progress\">GitLab repository<\/a>.<\/p>\n\n\n\n<h3>More about timing<\/h3>\n\n\n\n<p>In point 3) I said that we must find a proper value for the RIT initialization: strict timing must be respected in order to obtain acceptable video playback.<\/p>\n\n\n\n<p>First of all, we need to wait a certain number of milliseconds between two consecutive frames, determined by the input video&#8217;s fps.<br> For example, a 25fps video leads to a <code>40ms<\/code> delay between frames.<\/p>\n\n\n\n<p>Then, we need a <strong>fast<\/strong> way of transferring each frame&#8217;s byte stream to the LCD. The GLCD library comes with an <code>LCD_SetPoint<\/code> function, which accepts the X-Y coordinates and the 16-bit color value as parameters, but it is <strong>too slow<\/strong> for video playback purposes!
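<\/p>\n\n\n\n<p>The timing arithmetic can be sketched in plain C. The helper names below are ours, and the clock frequency is an assumption about the board configuration, so treat this as an illustration rather than the project&#8217;s actual code:<\/p>

```c
#include <stdint.h>

/* Delay between two consecutive frames, derived from the source frame
 * rate: a 25 fps video needs a new frame every 1000 / 25 = 40 ms. */
static uint32_t frame_delay_ms(uint32_t fps)
{
    return 1000u / fps;
}

/* Hypothetical RIT compare value: the LPC1768 RIT counter increments
 * once per clock tick, so the compare register must hold the number of
 * ticks that fit into the desired delay. The clock frequency is left
 * as a parameter (e.g. 100000000 for a 100 MHz clock) because it
 * depends on how the board is configured. */
static uint32_t rit_compare_value(uint32_t rit_clk_hz, uint32_t delay_ms)
{
    return (rit_clk_hz / 1000u) * delay_ms;
}
```

<p>With a 100 MHz clock and a 25fps source, this gives a compare value of 4,000,000 ticks per frame.<\/p>\n\n\n\n<p>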
In fact, <code>LCD_SetPoint<\/code> performs three writes to the LCD for every pixel: two to select the coordinates (LCD&#8217;s registers R20h and R21h) and one for the output RGB value (R22h).<\/p>\n\n\n\n<div id=\"lpc_note\"><\/div>\n\n\n\n<p style=\"font-size:14px;text-align:left\" class=\"has-background has-white-background-color\"><strong><em>Please note that<\/em><\/strong><em> each write actually hides a double write to the LCD (one to the IR index register and one with the actual value). Furthermore, although the ILI932X LCD provides a 16-bit interface, the way it is wired on this board only allows 8-bit transfers (only 8 out of 16 pins are connected).<\/em><\/p>\n\n\n\n<p>We definitely need to dig into the LCD manual to find a faster (and more constant in terms of delay) way of transferring pixels.<\/p>\n\n\n\n<p>Luckily, the ILI932x LCD offers a <strong>Window Address Area<\/strong> mode that allows the user to select a subset of the screen: by writing to the R50&#8211;53h registers we set the window&#8217;s four corner coordinates. Then we can send all of the frame&#8217;s pixels with a single index register write (selecting R22h, the data-transfer register) followed by a continuous flow of pixel data.
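<\/p>\n\n\n\n<p>The register sequence just described can be modeled off-target. In the sketch below the bus writes are recorded into an array instead of being sent to the memory-mapped LCD interface, so the exact write sequence can be inspected; the register numbers come from the ILI932x datasheet, while the type and function names are ours, not the GLCD library&#8217;s:<\/p>

```c
#include <stddef.h>
#include <stdint.h>

/* One write on the LCD bus: either a selection of a register through
 * the IR index register, or a data write to the selected register. */
typedef struct {
    uint8_t  is_index;  /* 1 = write to IR, 0 = data write */
    uint16_t value;     /* register number or data value   */
} lcd_op;

/* Emit the Window Address Area sequence for one frame into ops[] and
 * return the number of bus writes. First the window corners are set
 * (R50h-R53h) and the address counter is moved to the top-left corner
 * (R20h/R21h); then the GRAM data register R22h is selected ONCE and
 * every pixel is streamed without re-sending the index.
 * ops must have room for 13 + n entries. */
static size_t lcd_frame_ops(uint16_t x0, uint16_t y0,
                            uint16_t x1, uint16_t y1,
                            const uint16_t *pixels, size_t n,
                            lcd_op *ops)
{
    const uint16_t regs[6] = { 0x50, 0x51, 0x52, 0x53, 0x20, 0x21 };
    const uint16_t vals[6] = { x0, x1, y0, y1, x0, y0 };
    size_t k = 0;
    for (int i = 0; i < 6; i++) {
        ops[k++] = (lcd_op){ 1, regs[i] };   /* select the register */
        ops[k++] = (lcd_op){ 0, vals[i] };   /* write its value     */
    }
    ops[k++] = (lcd_op){ 1, 0x22 };          /* select GRAM, once   */
    for (size_t i = 0; i < n; i++)
        ops[k++] = (lcd_op){ 0, pixels[i] }; /* one write per pixel */
    return k;
}
```

<p>For a frame of <em>n<\/em> pixels this costs 13 + <em>n<\/em> bus writes, against the 3 register writes (6 counting the hidden index writes) that <code>LCD_SetPoint<\/code> spends on every single pixel.<\/p>\n\n\n\n<p>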
Refer to the following picture<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img loading=\"lazy\" width=\"1024\" height=\"444\" src=\"http:\/\/cas.polito.it\/NXP-LANDTIGER@PoliTo-University\/wp-content\/uploads\/2020\/10\/WindowAreaTiming-1024x444.png\" alt=\"\" class=\"wp-image-239\" srcset=\"https:\/\/cas.polito.it\/NXP-LANDTIGER@PoliTo-University\/wp-content\/uploads\/2020\/10\/WindowAreaTiming-1024x444.png 1024w, https:\/\/cas.polito.it\/NXP-LANDTIGER@PoliTo-University\/wp-content\/uploads\/2020\/10\/WindowAreaTiming-300x130.png 300w, https:\/\/cas.polito.it\/NXP-LANDTIGER@PoliTo-University\/wp-content\/uploads\/2020\/10\/WindowAreaTiming-768x333.png 768w, https:\/\/cas.polito.it\/NXP-LANDTIGER@PoliTo-University\/wp-content\/uploads\/2020\/10\/WindowAreaTiming.png 1033w\" sizes=\"(max-width: 709px) 85vw, (max-width: 909px) 67vw, (max-width: 1362px) 62vw, 840px\" \/><figcaption>Note: with the basic <code>LCD_SetPoint<\/code> we would have needed at least 5 writes between each consecutive pixel (not taking into account the 16-bit interface problem)<\/figcaption><\/figure>\n\n\n\n<h3>Showcase<\/h3>\n\n\n\n<figure class=\"wp-block-video\"><video controls src=\"http:\/\/cas.polito.it\/NXP-LANDTIGER@PoliTo-University\/wp-content\/uploads\/2020\/10\/vertical.mp4\"><\/video><\/figure>\n\n\n\n<figure class=\"wp-block-video\"><video controls src=\"http:\/\/cas.polito.it\/NXP-LANDTIGER@PoliTo-University\/wp-content\/uploads\/2020\/10\/horizontal.mp4\"><\/video><\/figure>\n\n\n\n<h3>In conclusion<\/h3>\n\n\n\n<p>A few other words to summarize our results and outline some open problems.<\/p>\n\n\n\n<p>We&#8217;ve managed to display a short sequence of frames so as to give us the feeling of a video. 
The conversion sequence is automated, so we can start it and import everything into our project.<\/p>\n\n\n\n<p>The commands are meant to be launched on a Unix\/Linux distribution, though there are <a href=\"https:\/\/gitforwindows.org\/\">tools<\/a> that make them available on Windows too.<\/p>\n\n\n\n<p>\u2705<strong>Achieved goals<\/strong><\/p>\n\n\n\n<ul><li>Video playback<\/li><li>Conversion of the input video<\/li><li>Option to zoom into the image <em>(with some quality loss)<\/em><\/li><\/ul>\n\n\n\n<p>\u274c  <strong>Limitations<\/strong><\/p>\n\n\n\n<ul><li>Unfortunately, the 512kB flash memory is only big enough to fit a small amount of data <em>(basically a sequence of a few dozen 50&#215;50 px frames)<\/em><\/li><li>Long board flashing times<\/li><li>Complex inclusion procedure: we need to include the headers and re-compile every time<\/li><\/ul>\n\n\n\n<p>\ud83d\udcfd\ufe0f  We have overcome these limitations with our improved <a href=\"http:\/\/cas.polito.it\/gitlab\/nxp-landtiger\/audio_video-player\/audio-visual-player-reading-data-from-sd-card\">Audio Visual player<\/a>, in which the SD card is used to load any pre-converted video, and a &#8211; somewhat complex and tailored &#8211; synchronization between audio and video allows correct playback (at a slightly higher resolution)!<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<p style=\"text-align:center\" class=\"has-text-color has-small-font-size has-medium-gray-color\">Federico Bitondo, s276294<br><br>Prof. Paolo Bernardi<br><\/p>\n","protected":false},"excerpt":{"rendered":"<p>The aim of this article is to briefly explain how to play a short video (as a sequence of frames) using the LandTiger LPC1768 board and the connected ILI9325 LCD screen (ILI9320 on some boards). First of all, we need a way to convert an input video into a compatible format.
&hellip; <a href=\"https:\/\/cas.polito.it\/NXP-LANDTIGER@PoliTo-University\/?p=225\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;DISPLAY \u2013 Simple LandTiger NXP LPC1768 video player&#8221;<\/span><\/a><\/p>\n","protected":false},"author":9,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[1],"tags":[],"_links":{"self":[{"href":"https:\/\/cas.polito.it\/NXP-LANDTIGER@PoliTo-University\/index.php?rest_route=\/wp\/v2\/posts\/225"}],"collection":[{"href":"https:\/\/cas.polito.it\/NXP-LANDTIGER@PoliTo-University\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cas.polito.it\/NXP-LANDTIGER@PoliTo-University\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/cas.polito.it\/NXP-LANDTIGER@PoliTo-University\/index.php?rest_route=\/wp\/v2\/users\/9"}],"replies":[{"embeddable":true,"href":"https:\/\/cas.polito.it\/NXP-LANDTIGER@PoliTo-University\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=225"}],"version-history":[{"count":47,"href":"https:\/\/cas.polito.it\/NXP-LANDTIGER@PoliTo-University\/index.php?rest_route=\/wp\/v2\/posts\/225\/revisions"}],"predecessor-version":[{"id":277,"href":"https:\/\/cas.polito.it\/NXP-LANDTIGER@PoliTo-University\/index.php?rest_route=\/wp\/v2\/posts\/225\/revisions\/277"}],"wp:attachment":[{"href":"https:\/\/cas.polito.it\/NXP-LANDTIGER@PoliTo-University\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=225"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cas.polito.it\/NXP-LANDTIGER@PoliTo-University\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=225"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cas.polito.it\/NXP-LANDTIGER@PoliTo-University\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=225"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}