%0 Conference Proceedings
%T Task Proposal: The TL;DR Challenge
%A Syed, Shahbaz
%A Völske, Michael
%A Potthast, Martin
%A Lipka, Nedim
%A Stein, Benno
%A Schütze, Hinrich
%Y Krahmer, Emiel
%Y Gatt, Albert
%Y Goudbeek, Martijn
%S Proceedings of the 11th International Conference on Natural Language Generation
%D 2018
%8 November
%I Association for Computational Linguistics
%C Tilburg University, The Netherlands
%F syed-etal-2018-task
%X The TL;DR challenge fosters research in abstractive summarization of informal text, the largest and fastest-growing source of textual data on the web, which has been overlooked by summarization research so far. The challenge owes its name to the frequent practice of social media users to supplement long posts with a “TL;DR”—for “too long; didn’t read”—followed by a short summary as a courtesy to those who would otherwise reply with the exact same abbreviation to indicate they did not care to read a post for its apparent length. Posts featuring TL;DR summaries form an excellent ground truth for summarization, and by tapping into this resource for the first time, we have mined millions of training examples from social media, opening the door to all kinds of generative models.
%R 10.18653/v1/W18-6538
%U https://aclanthology.org/W18-6538
%U https://doi.org/10.18653/v1/W18-6538
%P 318-321