Poster and Demo Instructions
The Demo&Poster session is the place where authors of Poster papers and Demonstration papers showcase their work and meet interested attendees for in-depth technical discussions. It is therefore important to present your work attractively and keep your message clear and prominent, so that attendees with an interest in your work are drawn to it. Please carefully follow the instructions below.
IMPORTANT: Please send your slides or videos for poster presentations as early as possible and only to email@example.com, preferably as a link.
Poster Preparation: All the Poster and Demo papers should prepare a poster to be displayed during the Demo&Poster Session (see program at http://sigspatial2019.sigspatial.org/program/).
Note: The poster presentation is mandatory. Papers without a poster presented during the Demo&Poster reception (resp. the SRC poster session) will be removed from the conference proceedings.
The size of your poster should be no more than 36 inches wide and 48 inches tall, and in portrait orientation. We will provide you with a backing board of this size along with binder clips to hold the posters to it. Please do not make a poster larger than the recommended size.
The title of your poster should appear at the top in CAPITAL letters about 25mm high. To the left of the title, put your Poster ID or Demo ID according to the list at the end of this document; below the title, put the author(s)' name(s) and affiliation(s).
The flow of your poster should be from the top left to the bottom right. Use arrows to lead your viewer through the poster. Use color for highlighting and to make your poster more attractive. Use pictures, diagrams, cartoons, figures, etc., rather than text wherever possible. Try to state your main result in six lines or fewer, in lettering about 15mm high, so that people can read the poster from a distance. The smallest text on your poster should be at least 9mm high, and the important points should be in a larger size.
Make your poster as self-explanatory as possible; this frees your time for in-depth technical discussions. You may bring additional audio or visual aids to enhance your presentation. To help you interact with the people who attend the session, we suggest preparing a short talk of no more than 2 minutes to introduce your work to viewers.
Demo Preparation: Only Demo paper authors need to prepare a demo in addition to the poster.
Presenters should bring their own laptops and devices. Wireless Internet will be available throughout the conference venue. We will try to provide demo paper authors with access to electrical outlets, but please prepare for the scenario where they may not be available to you. If you definitely need power, contact us in advance so that we can make arrangements with the hotel. Presenters should prepare a demonstration of their system that they can give periodically to those assembled around their table throughout the reception.
Prepare Fast-Forward Preview Slides. Only Poster papers are presented in the Fast-Forward Session!
Authors of each Poster paper should also prepare a 3-slide presentation to give a quick overview of their poster during the "Fast-Forward Preview" session (held before the Poster&Demo reception; see program at http://sigspatial2019.sigspatial.org/program/). The entire session will last 1 hour 30 minutes, and each poster paper will have exactly two minutes to give the audience an overview of the poster. The slides will be converted to a video to avoid incompatibilities between different slide formats, so use animations and automated slide changes to time your slides to exactly 2:00 minutes (details below). We prefer a 16:9 layout, though you can use 4:3 as well. For testing purposes, you can convert the slides yourself by using MS PowerPoint (or any other tool used to generate your presentation) and saving them as Microsoft Video (WMV). The first slide should allow enough time for you to introduce yourself and get to the microphone. The Fast-Forward Preview slides should be sent to the Poster Chair email address firstname.lastname@example.org, preferably as a link, no later than 11:59PM, October 30th, 2019.
Poster papers without fast-forward slides submitted will be removed from the conference proceedings.
Detailed instructions: Please prepare your presentation in Microsoft PowerPoint. Your presentation should consist of three slides - one slide with the Poster ID, title of the paper, names of authors and affiliations (title slide) and two slides on the content of your work (content slides). You are encouraged to include pictures, screen shots, animations, and movies in your presentation.
Please make your slides "self running" and timed so that they last EXACTLY TWO MINUTES. The title slide should be allocated a minimum of 10 seconds, so that you have enough time to walk to the podium (and also for the preceding presenting author to leave the podium). Please divide the remaining time (one minute and fifty seconds) between the content slides.
All the slides from every poster paper will be put together into a master presentation. The master presentation will be "self-running": you will not have any control over the progression of the slides, which will advance entirely automatically. Therefore, please rehearse and set the transition times accordingly. Also be prepared for a sometimes inevitable delay of 5-10 seconds before you can start.
Demos are not presented in the "Fast-Forward" session.
Poster and Demo Awards
To encourage the authors of poster and demo papers to participate in the conference and introduce their work, the ACM SIGSPATIAL GIS organizing committee has instituted three awards for the poster and demo authors, which will be presented during the banquet.
- Best Fast Forward Preview Presentation: The best 2-minute presentation given during the fast-forward preview session ("best" from a visual and attention-grabbing standpoint in addition to the scientific value).
- Best Poster: The best actual Poster presented during the Poster&Demo Reception ("best" including an aesthetic sense).
- Best Demo: The best Demo presented during the Poster&Demo Reception.
Please contact the Poster Co-Chairs if you have any questions.
A) Poster Numbers: Please use the following numbers on both your slides and your posters in order to simplify the work of the poster awards committee:
|P1||Sadegh Motallebi, Hairuo Xie, Egemen Tanin, Jianzhong Qi and Kotagiri Ramamohanarao||Streaming Route Assignment for Connected Autonomous Vehicles (Systems Paper)|
|P2||Takahiro Yabe, Kota Tsubouchi, Toru Shimizu, Yoshihide Sekimoto and Satish Ukkusuri||City2City: Translating Place Representations across Cities|
|P3||Yifang Yin, Zhenguang Liu, Ying Zhang, Sheng Wang, Rajiv Shah and Roger Zimmermann||GPS2Vec: Towards Generating Worldwide GPS Embeddings|
|P4||Tao Liu, Lexie Yang and Dalton Lunga||Towards Misregistration-Tolerant Change Detection using Deep Learning Techniques with Object-Based Image Analysis|
|P5||Helen Craig, Dragomir Yankov, Renzhong Wang, Pavel Berkhin and Wei Wu||Scaling Address Parsing Sequence Models through Active Learning|
|P6||Padraig Corcoran||Topological Generalization of Continuous Valued Raster Data|
|P7||Wei Shao, Sichen Zhao, Siyu Tan, Arain Prabowo, Piotr Koniusz, Flora D. Salim, Jeffrey Chan, Xinhong Hei and Bradley Fees||Flight Delay Prediction using Airport Situational Awareness Map|
|P8||Munkh-Erdene Yadamjav, Zhifeng Bao, Farhana Choudhury, Hanan Samet and Baihua Zheng||Querying Continuous Periodic Convoys of Interest|
|P9||Michael R. Evans, Renzhong Wang, Dragomir Yankov, Senthil Palanisamy, Siddhartha Arora and Wei Wu||Routines - A System for Inference, Analysis and Prediction of Users Daily Location Visits (Industrial Paper)|
|P10||Qiang Gao, Goce Trajcevski, Fan Zhou, Kunpeng Zhang, Ting Zhong and Fengli Zhang||DeepTrip: Adversarially Understanding Human Mobility for Trip Recommendation|
|P11||Antonios Karatzoglou and Michael Beigl||Semantic-Enhanced Learning (SEL) on Artificial Neural Networks Using the Example of Semantic Location Prediction|
|P12||Kai Zhao, Jie Feng, Zhao Xu, Tong Xia, Yong Li and Depeng Jin||DeepMM: Deep Learning Based Map Matching with Data Augmentation|
|P13||Jiahui Wu, Lingzi Hong and Vanessa Frias-Martinez||Predicting Perceived Level of Cycling Safety for Cycling Trips|
|P14||Tobias Skovgaard Jepsen, Christian Søndergaard Jensen and Thomas Dyhre Nielsen||Graph Convolutional Networks for Road Networks|
|P15||Lars Arge, Allan Grønlund, Jonas Tranberg and Svend Christian Svendsen||Learning to Find Hydrological Corrections|
|P16||Jeff Phillips and Pingfan Tang||Simple Distances for Trajectories via Landmarks|
|P17||Mohamed Ali, Abdeltawab Hendawi, Ashley Song, Peiwei Cao, Zhihong Zhang, Sree Sindhu Sabbineni, Jianwei Shen and John Krumm||Which One is Correct, The Map or The GPS Trace|
|P18||Xun Tang, Jayant Gupta and Shashi Shekhar||Linear Hotspot Discovery on All Simple Paths: A Summary of Results|
|P19||Roxana Herschelman and Kwangsoo Yang||Conflict-Free Evacuation Route Planner: A Summary of Results|
|P20||Hong Wei, Janit Anjaria and Hanan Samet||Learning Embeddings of Spatial, Textual and Temporal Entities in Geotagged Tweets|
|P21||Nicolas Tempelmeier, Udo Feuerhake, Oskar Wage and Elena Demidova||ST-Discovery: Data-Driven Discovery of Spatio-Temporal Dependencies in Urban Road Networks|
|P22||Vaibhav Kulkarni and Benoit Garbinato||20 Years of Mobility Modeling & Prediction: Trends, Shortcomings & Perspectives|
|P23||Kevin Buchin, Anne Driemel, Natasja van de L'Isle and Andre Nusser||klcluster: Center-based Clustering of Trajectories|
|P24||Dimitri Vorona, Andreas Kipf, Thomas Neumann and Alfons Kemper||DeepSPACE: Approximate Geospatial Query Processing with Deep Learning|
|P25||Vassilis Kaffes, Giorgos Giannopoulos, Nikos Karagiannakis and Nontas Tsakonas||Learning Domain Specific Models for Toponym Interlinking|
|P26||Arielle Moro, Vaibhav Kulkarni, Pierre-Adrien Ghiringhelli, Bertil Chapuis and Benoit Garbinato||Breadcrumbs: A Feature Rich Mobility Dataset with Point of Interest Annotation|
|P27||Zipei Fan, Xuan Song, Quanjun Chen, Renhe Jiang, Kota Tsubouchi and Ryosuke Shibasaki||Deep Multiple Instance Learning for Human Trajectory Identification|
|P28||Dimitrios Tsitsigkos, Panagiotis Bouros, Nikos Mamoulis and Manolis Terrovitis||Parallel In-Memory Evaluation of Spatial Joins|
|P29||Tamal Dey, Jiayuan Wang and Yusu Wang||Road Network Reconstruction from satellite images with Machine Learning Supported by Topological Methods|
|P30||Jonas Sauer, Dorothea Wagner and Tobias Zundorf||Efficient Computation of Multi-Modal Public Transit Traffic Assignments using ULTRA|
|P31||Tessa Berry, Nicholas Dronen, Brett Jackson and Ian Endres||Parking Lot Instance Segmentation from Satellite Imagery through Associative Embeddings|
|P32||Srinivasa Raghavendra Bhuvan Gummidi, Esteban Zimanyi, Torben Bach Pedersen and Xike Xie||Push-based Spatial Crowdsourcing for Enriching Semantic Tags in OpenStreetMap|
|P33||Mingxiao Li, Song Gao, Yunlei Liang, Joseph Marks, Yuhao Kang and Moyin Li||A Data-Driven Approach to Understanding and Predicting the Spatiotemporal Availability of Street Parking|
|P34||Nicholas Howe, Jerod Weinman, John Gouwar and Aabid Shamji||Part-Structured Models for Automatically Georeferencing Historical Map Images|
|P35||Subhodip Biswas, Fanglan Chen, Zhiqian Chen, Andreea Sistrunk, Nathan Self, Chang-Tien Lu and Naren Ramakrishnan||REGAL: A Regionalization framework for school boundaries|
|P36||Duy Vo Nguyen Le, Takuto Sakuma, Taiju Ishiyama, Hiroki Toda, Kazuya Arai, Masayuki Karasuyama, Yuta Okubo, Masayuki Sunaga, Yasuo Tabei and Ichiro Takeuchi||Statistically Discriminative Sub-trajectory Mining with Multiple Testing Correction|
|P37||An Yan and Bill Howe||FairST: Equitable Spatial and Temporal Demand Prediction for New Mobility Systems|
|P38||Payas Rajan and Chinya Ravishankar||The Phase Abstraction for Estimating Energy Consumption and Travel Times for Electric Vehicle Route Planning|
B) Demo Numbers: Please use the following numbers on both your slides and your posters in order to simplify the work of the demo awards committee:
|D1||Yunfan Kang, Ziang Zhao, Amr Magdy, Win Cowger and Andrew Gray||Scalable Multi-resolution Spatial Visualization for Anthropogenic Litter Data|
|D2||Han Hu, Nhathai Phan, Xinyue Ye, Ruoming Jin, Dejing Dou, Kele Ding and Huy Vo||DrugTracker: A Community-focused Drug Abuse Monitoring and Supporting System using Social Media and Geospatial Data|
|D3||Panote Siriaraya, Yihong Zhang, Yuanyuan Wang, Yukiko Kawai, Peter Jeszenszky and Adam Jatowt||Witnessing Crime through Tweets: A Crime Investigation Tool based on Social Media|
|D4||Mohamed Ali, Abdeltawab Hendawi, Ashley Song, Peiwei Cao, Zhihong Zhang, Sree Sindhu Sabbineni, Jianwei Shen and John Krumm||An Interactive Map-based System for Visually Exploring and Cleaning GPS Traces|
|D5||Joon-Seok Kim, Hamdi Kavak, Umar Manzoor, Andrew Crooks, Dieter Pfoser, Carola Wenk and Andreas Zuefle||Simulating Urban Patterns of Life: A Geo-Social Data Generation Framework|
|D6||Nelly Barret, Fabien Duchateau, Franck Favetta and Ludovic Moncla||Spatial Entity Matching with GeoAlign|
|D7||Younes Hamdani, Remy Thibaud and Christophe Claramunt||A Hybrid Temporal GIS for Coastal Dynamics|
|D8||Tunaggina Khan, Anowarul Kabir, Dieter Pfoser and Andreas Zuefle||CrowdZIP: A System to Improve Reverse ZIP Code Geocoding using Spatial and Crowdsourced Data|
|D9||José Duarte, Bruno Silva, José Moreira, Paulo Dias, Enrico Miranda and Rogério Luís De Carvalho Costa||Towards a qualitative analysis of interpolation methods for deformable moving regions|
|D10||Philip Brown, Yaron Kanza and Velin Kounev||Height and Facet Extraction from LiDAR Point Cloud for Automatic Creation of 3D Building Models|
|D11||Pedro Rossa, Rafael Horota, Alysson Soares Aires, Lucas Kupssinsku, Carolina Jung Kremer, Eniuce Souza, Ademir Marques Jr., Luiz Gonzaga Jr., Mauricio Veronez and Caroline Cazarin||VROffice: interactive and immersive 3D visualization, manipulation and correlation of multivariable georeferenced datasets in virtual reality|
Should your work be missing from this list, please contact the poster chairs as soon as possible.