{"id":263,"date":"2019-04-17T21:35:08","date_gmt":"2019-04-17T20:35:08","guid":{"rendered":"https:\/\/iapr-tc10.univ-lr.fr\/?p=263"},"modified":"2019-06-17T14:15:00","modified_gmt":"2019-06-17T13:15:00","slug":"iapr-tc10-newsletter-135-april-2019","status":"publish","type":"post","link":"https:\/\/iapr-tc10.univ-lr.fr\/?p=263","title":{"rendered":"[IAPR-TC10] Newsletter 135 &#8211; April 2019"},"content":{"rendered":"\n<div class=\"wp-block-image\"><figure class=\"aligncenter is-resized\"><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2019\/03\/new_TC10_version3-1024x571.png\" alt=\"\" class=\"wp-image-312\" width=\"194\" height=\"108\" srcset=\"https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2019\/03\/new_TC10_version3-1024x571.png 1024w, https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2019\/03\/new_TC10_version3-300x167.png 300w, https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2019\/03\/new_TC10_version3-768x429.png 768w, https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2019\/03\/new_TC10_version3.png 1025w\" sizes=\"(max-width: 194px) 100vw, 194px\" \/><\/figure><\/div>\n\n\n\n<div class=\"wp-block-media-text alignwide has-media-on-the-right\"><figure class=\"wp-block-media-text__media\"><img decoding=\"async\" loading=\"lazy\" width=\"640\" height=\"414\" src=\"https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2019\/01\/easter-eggs-252874_640.jpg\" alt=\"\" class=\"wp-image-390\" srcset=\"https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2019\/01\/easter-eggs-252874_640.jpg 640w, https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2019\/01\/easter-eggs-252874_640-300x194.jpg 300w\" sizes=\"(max-width: 640px) 100vw, 640px\" \/><\/figure><div class=\"wp-block-media-text__content\">\n<p>Welcome to the April edition of the TC10 newsletter.<\/p>\n\n\n\n<p>In this edition, you will find the updated call for papers for GREC 2019 with two types of submission (short\/full papers), the last 
call for participation in the ICDAR competitions, the call for nominations for the ICDAR 2019 awards, the call for hosting proposals for ICDAR 2023 and the current special issues of PRL.<\/p>\n\n\n\n<p>Happy Easter!<br>Christophe Rigaud<br>IAPR-TC10 Communications Officer<br><\/p>\n<\/div><\/div>\n\n\n\n<p><strong>Table of contents:<\/strong><br>1) <a href=\"#1\">Upcoming deadlines and events<\/a><br>2) <a href=\"#2\">GREC 2019 workshop (updated)<\/a><br>3) <a href=\"#3\">ICDAR 2019 competitions<\/a><br>4) <a href=\"#4\">ICDAR 2019: Call for Nominations for Awards<\/a><br>5) <a href=\"#5\">ICDAR 2023: Call for Hosting Proposals<\/a><br>6) <a href=\"#6\">PRL: Special Issue on Hierarchical Representations<\/a><br>7) <a href=\"#7\">PRL: Special Issue on PR and AI for cultural heritage<\/a><br>8) <a href=\"#8\">Job offers (new)<\/a><\/p>\n\n\n\n<p><strong>Call for Contributions:<\/strong> please contribute to the TC10 newsletter by sending any relevant news, event, notice, open position, dataset or link to us at iapr.tc10@gmail.com<\/p>\n\n\n\n<h2 id=\"1\">1) Upcoming deadlines and events<\/h2>\n\n\n\n<h4>2019<\/h4>\n\n\n\n<ul><li><strong>May 1st<\/strong> nomination submission deadline for the ICDAR 2019 Awards<\/li><li><strong>May 8-10<\/strong> conference <a href=\"http:\/\/datech.digitisation.eu\/call-for-papers\/\">DATECH 2019<\/a>, Brussels, Belgium<\/li><li><strong>May 
20<\/strong> paper submission deadline for <a href=\"http:\/\/grec2019.univ-lr.fr\">GREC 2019<\/a><\/li><li><strong>May 30<\/strong> paper submission deadline for the <a href=\"https:\/\/www.journals.elsevier.com\/pattern-recognition-letters\/call-for-papers\/special-issue-hierarchical-representations-new-results\">PRL Special Issue &#8220;Hierarchical Representations&#8221;<\/a><\/li><li><strong>June 1st<\/strong> hosting proposal deadline for ICDAR 2023<\/li><li><strong>June 30<\/strong> paper submission deadline for the <a href=\"https:\/\/www.journals.elsevier.com\/pattern-recognition-letters\/call-for-papers\/pattern-recognition-and-artificial-intelligence-techniques\">PRL Special Issue &#8220;Cultural Heritage&#8221;<\/a><\/li><li><strong>September 20-21<\/strong> workshop <a href=\"http:\/\/grec2019.univ-lr.fr\">GREC 2019<\/a>, Sydney, Australia<\/li><li><strong>September 22-25<\/strong> conference <a href=\"http:\/\/icdar2019.org\/\">ICDAR 2019<\/a>, Sydney, Australia<\/li><li><strong>October 27 &#8211; November 3<\/strong> conference <a href=\"http:\/\/iccv2019.thecvf.com\/\">ICCV 2019<\/a>, Seoul, Korea<\/li><li><strong>November 13<\/strong> paper submission deadline for <a href=\"http:\/\/www.vlrlab.net\/das2020\/\">DAS 2020<\/a><\/li><\/ul>\n\n\n\n<h4>2020<\/h4>\n\n\n\n<ul><li><strong>May 17-20<\/strong> workshop <a href=\"http:\/\/www.vlrlab.net\/das2020\/\">DAS 2020<\/a>, Wuhan, China<\/li><\/ul>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h2 id=\"2\"><a href=\"https:\/\/grec2019.univ-lr.fr\/\">2) GREC 2019<\/a> workshop (updated)<\/h2>\n\n\n\n<p>GREC workshops provide an excellent opportunity for researchers and practitioners at all levels of experience to meet colleagues and to share new ideas and knowledge about graphics recognition methods. 
Graphics Recognition is a subfield of document image analysis that deals with graphical entities in engineering drawings, comics, musical scores, sketches, maps, architectural plans, mathematical notation, tables, diagrams, etc.<\/p>\n\n\n\n<p>The aim of this workshop is to maintain a very high level of interaction and creative discussion between participants, preserving a &#8220;workshop&#8221; spirit rather than drifting towards a &#8220;mini-conference&#8221; model.<\/p>\n\n\n\n<p>For this edition, authors are invited to submit two types of papers:<\/p>\n\n\n\n<ul><li>Full papers describing complete works of research (up to 6 pages). They will undergo a rigorous review process with a minimum of 2 reviews, considering the originality of the work.<\/li><li>Short papers providing an opportunity to report on research in progress and to present novel positions on graphics recognition (up to 2 pages). Short papers will also undergo review and will appear in an extra booklet, not in the official proceedings. The booklet will be available on the GREC 2019 website.<\/li><\/ul>\n\n\n\n<p>Full papers will be published according to the same policy and conditions as ICDAR 2019 conference papers (format, length, publication site). See the <a href=\"http:\/\/icdar2019.org\/paper-submission\/\" target=\"_blank\" rel=\"noreferrer noopener\" aria-label=\"ICDAR guidelines (opens in a new tab)\">ICDAR guidelines<\/a> for more information.<br> Short papers must follow the same format as full papers, but within a limit of 2 pages.<\/p>\n\n\n\n<p><strong>Important dates<\/strong><\/p>\n\n\n\n<ul><li>Submission deadline: May 20, 2019<\/li><li>Acceptance notification: June 15, 2019<\/li><li>Camera ready due: June 30, 2019<\/li><li>Workshop: September 
20 &#8211; 21, 2019<\/li><\/ul>\n\n\n\n<p><a href=\"https:\/\/grec2019.univ-lr.fr\/call-for-papers\/\" target=\"_blank\" rel=\"noreferrer noopener\" aria-label=\" (opens in a new tab)\">See more&#8230;<\/a><\/p>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h2 id=\"3\">3) ICDAR 2019 competitions related to graphics recognition<\/h2>\n\n\n\n<p>ICDAR 2019 will organize a set of competitions dedicated to a wide range of document analysis problems. You are cordially invited to participate in this scientific event, which will be an excellent opportunity to objectively compare the quality of algorithms on different categories of challenges. You will find the full list of competitions <a href=\"http:\/\/icdar2019.org\/competitions-2\/\">here<\/a>. The following competitions are either related to graphics recognition or were proposed for promotion by members of the TC10 community:<\/p>\n\n\n\n<ol><li>ICDAR 2019 <a href=\"https:\/\/fgc.univ-lr.fr\/challenge\">Competition on Fine-Grained Classification of Comic Characters<\/a>, registration deadline: <strong>passed<\/strong><\/li><li>ICDAR 2019 <a href=\"https:\/\/orf.univ-lr.fr\/\">Competition on Object Detection\/Recognition in Floorplan image<\/a>, registration deadline: <strong>passed<\/strong><\/li><li>ICDAR 2019 <a href=\"http:\/\/rrc.cvc.uab.es\/?ch=13\">Competition on \u201cScanned Receipts OCR and Information Extraction\u201d<\/a>, registration deadline: <strong>passed<\/strong><\/li><li>ICDAR 2019 <a href=\"http:\/\/rrc.cvc.uab.es\/?ch=15\">Competition: \u201cChallenge on Multi-lingual Scene Text Detection and Recognition\u201d<\/a>, registration deadline: <strong>2 May<\/strong><\/li><li>ICDAR 2019 <a href=\"https:\/\/www.zurich.ibm.com\/FormUnderstanding\/\">Competition on \u201cForm Understanding in Noisy Scanned Documents\u201d<\/a>, registration deadline: <strong>passed<\/strong><\/li><\/ol>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h2 id=\"4\">4) ICDAR 2019: Call for Nominations for Awards 
(repost)<\/h2>\n\n\n\n<p>Call for Nominations for ICDAR 2019 Awards<br>International Conference on Document Analysis and Recognition (ICDAR)<br><a href=\"http:\/\/icdar2019.org\">http:\/\/icdar2019.org<\/a><\/p>\n\n\n\n<p>The ICDAR Award Program is an established program designed to recognize individuals who have made outstanding contributions to the field of Document Analysis and Recognition in one or more of the following areas:<\/p>\n\n\n\n<p>o  Research<br>o  Training of students<br>o  Research\/Industry interaction<br>o  Service to the profession<\/p>\n\n\n\n<!--more-->\n\n\n\n<p>Every two years, two award categories are presented: the IAPR\/ICDAR Young Investigator Award (nominee less than 40 years old at the time the award is made) and the IAPR\/ICDAR Outstanding Achievements Award. Each award consists of a token gift and a suitably inscribed certificate. The recipient of the Outstanding Achievements Award will be invited to give the opening keynote speech at the ICDAR 2019 conference, introduced by the recipient from the previous conference.<\/p>\n\n\n\n<p>Nominations are invited for the ICDAR 2019 Awards in both categories.<br>The nomination packet should include the following:<\/p>\n\n\n\n<p>1. A nominating letter (1 page) including a brief citation to be included in the certificate.<br>2. A brief curriculum vitae (2 pages) of the nominee highlighting the accomplishments being recognized.<br>3. 
Supporting letters (1 page each) from 3 active researchers from at least 3 different countries.<\/p>\n\n\n\n<p>A nomination is usually put forward by a researcher (preferably from a different institution than the nominee) who is knowledgeable about the scientific achievements of the nominee, and who organizes the letters of support.<\/p>\n\n\n\n<p>The submission procedure is strictly confidential, and self-nominations are not allowed.<\/p>\n\n\n\n<p>Please send nomination packets electronically to the Awards Committee Co-Chairs, Dimosthenis Karatzas [dimos@cvc.uab.es] and Alicia Fornes [afornes@cvc.uab.es].<\/p>\n\n\n\n<p>Deadline: May 1st, 2019, but early submissions are strongly encouraged.<\/p>\n\n\n\n<p>The final decision will be made by the Awards Committee, which is composed of the ICDAR Advisory Board and the previous awardees.<\/p>\n\n\n\n<p>ICDAR Advisory Board<\/p>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h2 id=\"5\">5) ICDAR 2023: Call for Hosting Proposals (repost)<\/h2>\n\n\n\n<p>CALL FOR PROPOSALS TO HOST ICDAR 2023<br><br>International Conference on Document Analysis and Recognition (ICDAR)<br><br>The ICDAR Advisory Board is seeking proposals to host the 17th International Conference on Document Analysis and Recognition, to be held in 2023 (ICDAR 2023).<\/p>\n\n\n\n<p>ICDAR is the premier IAPR event in the field of Document Analysis and Recognition, with 300 to 500 participants. 
The aim of this conference is to bring together international experts to share their experiences and to promote research and development in all areas of Document Analysis and Recognition.<\/p>\n\n\n\n<!--more-->\n\n\n\n<p>Any consortium interested in making a proposal to host an ICDAR should first familiarize themselves with the &#8220;Guidelines for Organizing and Bidding to Host ICDAR&#8221; document, which is available on the TC10 and TC11 websites (www.iapr-tc10.org and www.iapr-tc11.org, respectively).<\/p>\n\n\n\n<p>A link to the most current version of the guidelines appears below. Small updates to the guidelines are expected during the next few weeks, so please check the TC11 website for the latest version: http:\/\/www.iapr-tc11.org\/mediawiki\/images\/ICDAR_Guidelines_2016_02_27.pdf<\/p>\n\n\n\n<p>The submission of a bid implies full agreement with the rules and procedures outlined in that document.<\/p>\n\n\n\n<p>The submitted proposal must clearly define the items specified in the guidelines (Section 5.2).<\/p>\n\n\n\n<p>It has been the tradition that the location of ICDAR conferences follows a rotating schedule among different continents. Hence, proposals from the Americas are encouraged. However, high-quality bids from other locations, for example from countries where ICDAR has never been held before, will also be considered. Proposals will be examined by the ICDAR Advisory Board.<\/p>\n\n\n\n<p>Proposals should be emailed to:<br>&#8211; Dr. Dimosthenis Karatzas at dimos@cvc.uab.es<br>&#8211; Dr. 
Alicia Fornes at afornes@cvc.uab.es<\/p>\n\n\n\n<p>Deadline: June 1, 2019<\/p>\n\n\n\n<p>ICDAR Advisory Board<\/p>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h2 id=\"6\"><a href=\"https:\/\/www.journals.elsevier.com\/pattern-recognition-letters\/call-for-papers\/special-issue-hierarchical-representations-new-results\">6) Pattern Recognition Letters: Special Issue on Hierarchical Representations<\/a> (repost)<\/h2>\n\n\n\n<p>The proposed virtual special issue will consider extended and updated versions of papers published at the recent ICPRAI 2018 conference, as well as submissions from anybody proposing innovative methods in the field of image representation, with emphasis on, but not restricted to, computer vision and image processing, medical imaging, 2D and 3D images, multi-modality, remote sensing image analysis, and image indexation and understanding.<\/p>\n\n\n\n<p>Image representations based on hierarchical, scale-space models and other non-regular\/irregular grids have become increasingly popular in image processing and computer vision over the past decades. Indeed, they allow modeling image content at different (and complementary) levels of scale, resolution and semantics. 
Methods based on such image representations have been able to tackle various complex challenges such as multi-scale image segmentation, image filtering, object detection, recognition, and more recently image characterization and understanding, potentially involving higher levels of semantics.<br><br><a href=\"https:\/\/www.journals.elsevier.com\/pattern-recognition-letters\/call-for-papers\/special-issue-hierarchical-representations-new-results\" target=\"_blank\" rel=\"noreferrer noopener\" aria-label=\" (opens in a new tab)\">See more&#8230;<\/a><\/p>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h2 id=\"7\"><a href=\"https:\/\/www.journals.elsevier.com\/pattern-recognition-letters\/call-for-papers\/pattern-recognition-and-artificial-intelligence-techniques\">7) Pattern Recognition and Artificial Intelligence Techniques for Cultural Heritage special issue<\/a> (repost)<\/h2>\n\n\n\n<p>Artificial Intelligence is rapidly permeating new areas of our lives day by day. At the same time, the management of Cultural Heritage is increasingly in need of new solutions to document, manage and visit (even virtually) the enormous amount of artifacts and information that come from the past. 
The cross-fertilization of these two worlds is now a reality and defines the main topics of this Virtual Special Issue (VSI).<\/p>\n\n\n\n<p><a href=\"https:\/\/www.journals.elsevier.com\/pattern-recognition-letters\/call-for-papers\/pattern-recognition-and-artificial-intelligence-techniques\" target=\"_blank\" rel=\"noreferrer noopener\" aria-label=\" (opens in a new tab)\">See more&#8230;<\/a><\/p>\n\n\n\n<h2 id=\"8\">8) Job offers (new)<\/h2>\n\n\n\n<h3><strong>&#8212; Post-doc position &#8211; L3i &#8211; La Rochelle, France &#8212;<\/strong><\/h3>\n\n\n\n<p><strong>Title:<\/strong> Recognition of text with variable styles in comic books<\/p>\n\n\n\n<p>The L3i laboratory has one open post-doc position in computer science, in the specific field of document image analysis and pattern recognition.<\/p>\n\n\n\n<p><strong>Duration<\/strong>: 12 months<br><strong>Position available from<\/strong>: May 1st, 2019<br><strong>Salary<\/strong>: approximately 2100 \u20ac \/ month (net)<br><strong>Place<\/strong>: L3i lab, University of La Rochelle, France<br><strong>Specialty<\/strong>: Computer Science \/ Image Processing \/ Document Analysis \/ Pattern Recognition<br><strong>Contact<\/strong>: Jean-Christophe BURIE (jcburie [at] univ-lr.fr)<\/p>\n\n\n\n<p><strong>Position Description<\/strong><\/p>\n\n\n\n<p>The L3i is a research lab of the University of La Rochelle. La Rochelle is a city in the southwest of France on the Atlantic coast and is one of the most attractive and dynamic cities in France. The L3i has worked for several years on document analysis and has developed well-known expertise in \u201cbande dessin\u00e9e\u201d, manga and comics analysis, indexing and understanding.<\/p>\n\n\n\n<p>The work done by the post-doc will take place in the context of <strong>SAiL<\/strong> (Sequential Art Image Laboratory), a joint laboratory involving L3i and a private company. 
The objective is to create innovative tools to index and interact with digital comics. The work will be done in a team of 10 researchers and engineers.<\/p>\n\n\n\n<p>The work will consist of developing original approaches for recognizing the text in speech balloons. Indeed, the style of the text changes according to the writing style chosen by the author. Each author usually digitizes his or her own handwriting to create a personalized font, which often looks like a handwritten font. Consequently, the shapes of the characters can change considerably from one comic album to another. Classic OCR (optical character recognition) algorithms give poor results, and even when an OCR is trained, it is only effective on albums with similar fonts.<\/p>\n\n\n\n<p>The large variability in character representation calls for robust approaches able to adapt to the different writing styles. The main idea will be to develop a strategy able to characterize and learn a style from only a few samples. Deep-learning-based strategies will be studied to reach this goal.<\/p>\n\n\n\n<p><strong>Qualifications<\/strong><\/p>\n\n\n\n<p>Candidates must have a completed PhD and research experience in image processing and analysis and in pattern recognition, especially text recognition. Some knowledge of and experience with deep learning is also recommended.
<\/p>\n\n\n\n<p><strong>General Qualifications<\/strong><\/p>\n\n\n\n<p>\u2022 Good programming skills, with mastery of at least one programming language such as Java, Python or C\/C++<br>\u2022 Good teamwork skills<br>\u2022 Good writing skills and proficiency in written and spoken English or French<br><\/p>\n\n\n\n<p><strong>Applications<\/strong><\/p>\n\n\n\n<p>Candidates should send a CV and a motivation letter to jcburie [at] univ-lr.fr.<\/p>\n\n\n\n<h3>&#8212; PhD position &#8211; L3i &#8211; La Rochelle, France &#8212;<\/h3>\n\n\n\n<p><strong>Title:<\/strong> Extraction of complex textual elements \u2013 Application to onomatopoeia detection and recognition in comic books<\/p>\n\n\n\n<p>The L3i laboratory has one open PhD position in computer science, in the specific field of document image analysis and pattern recognition.<\/p>\n\n\n\n<p><strong>Duration<\/strong>: 36 months<br><strong>Position available from<\/strong>: June 1st, 2019<br><strong>Salary<\/strong>: approximately 1200 \u20ac \/ month (net)<br><strong>Place<\/strong>: L3i lab, University of La Rochelle, France<br><strong>Specialty<\/strong>: Computer Science \/ Image Processing \/ Document Analysis \/ Pattern Recognition<br><strong>Contact<\/strong>: Jean-Christophe BURIE (jcburie [at] univ-lr.fr)<\/p>\n\n\n\n<p><strong>Position Description<\/strong><\/p>\n\n\n\n<p>The L3i is a research lab of the University of La Rochelle. La Rochelle is a city in the southwest of France on the Atlantic coast and is one of the most attractive and dynamic cities in France. The L3i has worked for several years on document analysis and has developed well-known expertise in \u201cbande dessin\u00e9e\u201d, manga and comics analysis, indexing and understanding.<\/p>\n\n\n\n<p>The work done by the PhD student will take place in the context of <strong>SAiL<\/strong> (Sequential Art Image Laboratory), a joint laboratory involving L3i and a private company. 
The objective is to create innovative tools to index and interact with digital comics.<\/p>\n\n\n\n<p>Comics are a combination of textual and graphic information. The textual elements mainly appear in speech balloons and correspond to the dialogues between the characters (heroes) of the story. However, textual information also appears in the panels, embedded among the graphic elements in the middle of the action, as shown in the figures below.<\/p>\n\n\n\n<ul class=\"is-layout-flex wp-block-gallery-1 wp-block-gallery columns-3 is-cropped\"><li class=\"blocks-gallery-item\"><figure><img decoding=\"async\" src=\"https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2019\/04\/image.png\" alt=\"\" data-id=\"366\" data-link=\"https:\/\/iapr-tc10.univ-lr.fr\/?attachment_id=366\" class=\"wp-image-366\"\/><\/figure><\/li><li class=\"blocks-gallery-item\"><figure><img decoding=\"async\" src=\"https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2019\/04\/image-1.png\" alt=\"\" data-id=\"367\" data-link=\"https:\/\/iapr-tc10.univ-lr.fr\/?attachment_id=367\" class=\"wp-image-367\"\/><\/figure><\/li><li class=\"blocks-gallery-item\"><figure><img decoding=\"async\" src=\"https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2019\/04\/image-2.png\" alt=\"\" data-id=\"368\" data-link=\"https:\/\/iapr-tc10.univ-lr.fr\/?attachment_id=368\" class=\"wp-image-368\"\/><\/figure><\/li><\/ul>\n\n\n\n<p>Samples of onomatopoeia in Franco-Belgian bandes dessin\u00e9es<\/p>\n\n\n\n<ul class=\"is-layout-flex wp-block-gallery-3 wp-block-gallery columns-3 is-cropped\"><li class=\"blocks-gallery-item\"><figure><img decoding=\"async\" src=\"https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2019\/04\/image-3.png\" alt=\"\" data-id=\"369\" data-link=\"https:\/\/iapr-tc10.univ-lr.fr\/?attachment_id=369\" class=\"wp-image-369\"\/><\/figure><\/li><li class=\"blocks-gallery-item\"><figure><img decoding=\"async\" 
src=\"https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2019\/04\/image-4.png\" alt=\"\" data-id=\"370\" data-link=\"https:\/\/iapr-tc10.univ-lr.fr\/?attachment_id=370\" class=\"wp-image-370\"\/><\/figure><\/li><li class=\"blocks-gallery-item\"><figure><img decoding=\"async\" src=\"https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2019\/04\/image-5.png\" alt=\"\" data-id=\"371\" data-link=\"https:\/\/iapr-tc10.univ-lr.fr\/?attachment_id=371\" class=\"wp-image-371\"\/><\/figure><\/li><\/ul>\n\n\n\n<p>Samples of onomatopoeia in American comics and Japanese manga<\/p>\n\n\n\n<p>Theses\ntextual elements are called onomatopoeia. An onomatopoeia is a word\nthat phonetically imitates, resembles, or suggests the sound that it\ndescribes. For example, \u201cmeow\u201d and \u201croar\u201d correspond\nrespectively to the noise of a cat and a lion. Japanese manga always\ninclude also many onomatopoeia that are not just imitative of sounds\nbut cover a much wider range of meanings. So, detecting and\nrecognizing the onomatopoeia can help to understand the content of a\npanel.<\/p>\n\n\n\n<p>The\nresearch topic of this thesis will consist in developing strategies\nto detect, extract and recognize the onomatopoeia, which own variable\ncharacteristics in terms of shape, colour and orientation. 
The main difficulty is that there are many different styles of onomatopoeia. Some of them correspond to text mixed with graphics, as shown in the following image.<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter is-resized\"><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/iapr-tc10.univ-lr.fr\/wp-content\/uploads\/2019\/04\/image-6.png\" alt=\"\" class=\"wp-image-372\" width=\"163\" height=\"163\"\/><\/figure><\/div>\n\n\n\n<p>Samples of onomatopoeia mixing text and graphics<\/p>\n\n\n\n<p>The objective of the work will be to propose original and robust approaches to detect and recognize these complex textual elements in different types of comic books.<\/p>\n\n\n\n<p><strong>Qualifications<\/strong><\/p>\n\n\n\n<p>Candidates must have a completed Master\u2019s degree in Computer Science with good knowledge of image processing, image analysis and pattern recognition. Some knowledge of machine learning and deep learning will be appreciated.<\/p>\n\n\n\n<p><strong>General Qualifications<\/strong><\/p>\n\n\n\n<p>\u2022 Good programming skills, with mastery of at least one programming language such as Java, Python or C\/C++<br>\u2022 Good teamwork skills<br>\u2022 Good writing skills and proficiency in written and spoken English or French<\/p>\n\n\n\n<p><strong>Applications<\/strong><\/p>\n\n\n\n<p>Candidates should send a CV and a motivation letter to jcburie [at] univ-lr.fr.<\/p>\n\n\n\n<p class=\"has-small-font-size\"><em>You have received this message because your email address is subscribed to the IAPR TC10 mailing list. You can unsubscribe by following this link (link to unsubscribe).<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Welcome to the April edition of the TC10 newsletter. 
In this edition, you will find the updated call for papers for GREC 2019 with two types of submission (short\/full papers), [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_exactmetrics_skip_tracking":false,"_exactmetrics_sitenote_active":false,"_exactmetrics_sitenote_note":"","_exactmetrics_sitenote_category":0,"_links_to":"","_links_to_target":""},"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/iapr-tc10.univ-lr.fr\/index.php?rest_route=\/wp\/v2\/posts\/263"}],"collection":[{"href":"https:\/\/iapr-tc10.univ-lr.fr\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/iapr-tc10.univ-lr.fr\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/iapr-tc10.univ-lr.fr\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/iapr-tc10.univ-lr.fr\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=263"}],"version-history":[{"count":63,"href":"https:\/\/iapr-tc10.univ-lr.fr\/index.php?rest_route=\/wp\/v2\/posts\/263\/revisions"}],"predecessor-version":[{"id":559,"href":"https:\/\/iapr-tc10.univ-lr.fr\/index.php?rest_route=\/wp\/v2\/posts\/263\/revisions\/559"}],"wp:attachment":[{"href":"https:\/\/iapr-tc10.univ-lr.fr\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=263"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/iapr-tc10.univ-lr.fr\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=263"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/iapr-tc10.univ-lr.fr\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=263"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}