Lea Frermann

Lecturer, CIS, Melbourne University

About

I am a lecturer in natural language processing at CIS, University of Melbourne, and affiliated with the ITTC and the Melbourne Centre for Data Science.

My research focuses on understanding how humans learn about and represent complex, evolving information in large-scale and noisy environments, and on using these insights to develop fairer and more robust automatic systems. I combine methods from natural language processing, machine learning, and computational cognitive modelling. Representative projects include scalable models of category and feature learning in children from noisy language data, and modelling the historical change of word meaning over centuries. My current research focuses on improving automatic understanding of narratives, both in fiction (e.g., inducing structured representations of novels, or movie summarization) and in reality, by analysing framing and narrative strategies in (biased) news stories. As part of our three-year project 'Fairness in NLP' with Tim Baldwin and Trevor Cohn, I revisit many of the above applications through the lens of bias and fairness.

Before joining Melbourne University, I was a postdoc / applied scientist at Amazon Core AI in Berlin, and before that a research associate in the Edinburgh NLP group, ILCC, University of Edinburgh, working with Mirella Lapata and Shay Cohen. I have spent research visits at Michael Frank's Language and Cognition Lab at Stanford University and at David Blei's lab at Columbia University.


Contact Details
Lecturer
School of Computing and Information Systems
The University of Melbourne
Victoria 3010, Australia

Office: 3.04 Doug McDonell Building
Phone: +61 3 9035 9888
Email: lea.{my_lastname}@unimelb.edu.au



Mini Bio

2019 - Lecturer at Melbourne University
2018 - 2019 Postdoc at Amazon Core AI (Berlin)
2017 - 2018 Research associate at ILCC, University of Edinburgh (collaborators Mirella Lapata and Shay Cohen)
2017 Visiting scholar at Language and Cognition Lab, Stanford University (host Michael Frank)
2013 - 2017 PhD at ILCC, University of Edinburgh (supervisors Mirella Lapata and Charles Sutton)
2016 Machine Learning Internship with Amazon Berlin (3 months)
2010 - 2013 MSc in Language Science and Technology from Saarland University (supervisors Ivan Titov and Manfred Pinkal)
2012 Erasmus Mundus research exchange at NTU Singapore; research project with Francis Bond.
2007 - 2010 BA in Linguistics, University of Bremen, Germany.




Collaborators, Postdocs, Students, ...

Current

  • Aili Shen, Research Fellow (2021--; with Tim Baldwin and Trevor Cohn)

  • Gisela Vallejo, Ph.D. student (2021--; co-supervised with Tim Baldwin)
  • Uri Berger, Ph.D. student (2021--; co-supervised with Omri Abend and Gabi Stanovsky)
  • John Xu, Ph.D. student (2021--; co-supervised with Charles Kemp and Yang Xu)
  • Shima Khanehzar, Ph.D. student (2020--; co-supervised with Andrew Turpin, Gosia Mikolajczak, and Trevor Cohn)
  • Sheilla Njoto, Ph.D. student (2020--; co-supervised with Leah Ruppanner and Marc Cheong)
  • Kemal Kurniawan, Ph.D. student (2019--; co-supervised with Trevor Cohn)
  • Chunhua Liu, Ph.D. student (2019--; co-supervised with Trevor Cohn)

Former

  • Shiva Subramanian, Research Fellow (2020--2021), now at Oracle


Invited Talks and Presentations


    09 / 2020 CIS Seminar Series, Melbourne University, Melbourne, Australia.
    Improving Narrative Understanding with Inductive Biases
    01 / 2020 Monash Neuroscience of Consciousness Lab, Monash University, Melbourne, Australia.
    Scaling Concept Learning and Story Understanding Through Natural Language Processing
    11 / 2019 Complex Human Data Hub, Melbourne University, Melbourne, Australia.
    Towards Conceptual Story Understanding
    04 / 2019 Data Science Seminar, Columbia University, New York, USA.
    Learning representations of long narratives for summarization and inference
    04 / 2019 University of Toronto, Toronto, Canada.
    Modeling Dynamics in Language, Learning and Inference
    04 / 2019 Johns Hopkins University, Baltimore, USA.
    Learning representations of long narratives for summarization and inference
    07 / 2018 University of Melbourne, Melbourne, VIC.
    From word learners to crime detectives: bridging the gap between human and machine learning
    04 / 2018 University of Washington, Seattle, USA.
    Whodunnit? Crime Drama as a Case for Natural Language Understanding
    02 / 2018 Universität Stuttgart, IMS, Stuttgart, Germany.
    Modelling the Dynamics of fine-grained Change in Word Meaning over Centuries
    10 / 2017 Saarland University, Saarbruecken, Germany.
    Whodunnit? Crime Drama as a Case for Natural Language Understanding
    10 / 2017 Alan Turing Institute, London, UK.
    Structured dynamic models of meaning for understanding language change and representing book plots
    09 / 2017 CoAStaL Copenhagen Natural Language Processing Group, Copenhagen, Denmark.
    Structure and Dynamics of Meaning in Humans and in Language
    08 / 2017 Stanford NLP Seminar Series, Stanford University, USA.
    Of Space Piracy and Secret Baby Romances: Deep Multi-View Book Representations and a Scalable Evaluation Framework
    11 / 2016 Keynote talk at the Drift-a-LOD workshop (co-located with EKAW), Bologna.
    Modelling fine-grained Change in Word Meaning over centuries from Large Collections of Unstructured Text
    12 / 2015 Heriot-Watt University, Edinburgh, Scotland, UK.
    Incremental Bayesian Learning of Semantic Categories and their Features
    09 / 2015 Google NLP PhD Summit, Zürich.
    07 / 2012 Invited Paper at the First Workshop on Multilingual Modeling (in conjunction with ACL 2012), Jeju, Korea.
    Cross-lingual Parse Disambiguation based on Semantic Correspondence




Publications

  • Thomas Scelsi, Alfonso Martinez Arranz and Lea Frermann (to appear). Principled Analysis of Energy Discourse across Domains with Thesaurus-based Automatic Topic Labeling, The 19th Annual Workshop of the Australasian Language Technology Association (ALTA) 2021.

  • Karun Varghese Mathew, Venkata S Aditya Tarigoppula and Lea Frermann (to appear). Multi-modal Intent Classification for Assistive Robots with Large-scale Naturalistic Datasets, The 19th Annual Workshop of the Australasian Language Technology Association (ALTA) 2021.

  • Chunhua Liu, Trevor Cohn and Lea Frermann (to appear). Commonsense Knowledge in Word Associations and ConceptNet, Conference on Computational Natural Language Learning (CoNLL) 2021.

  • Shivashankar Subramanian, Afshin Rahimi, Timothy Baldwin, Trevor Cohn and Lea Frermann (to appear). Fairness-aware Class Imbalanced Learning, EMNLP 2021.

  • Shivashankar Subramanian, Xudong Han, Timothy Baldwin, Trevor Cohn and Lea Frermann (to appear). Evaluating Debiasing Techniques for Intersectional Biases, EMNLP 2021.

  • Lea Frermann and Mirella Lapata (2021). Categorization in the Wild: Generalizing Cognitive Models to Naturalistic Data across Languages, Proceedings of the 43rd Annual Meeting of the Cognitive Science Society. Data

  • Shima Khanehzar, Trevor Cohn, Gosia Mikolajczak, Andrew Turpin and Lea Frermann (2021). Framing Unpacked: A Semi-Supervised Interpretable Multi-View Model of Media Frames, NAACL 2021. BibTeX Code

  • Kemal Kurniawan, Lea Frermann, Philip Schulz and Trevor Cohn (2021). PTST-UoM at SemEval-2021 Task 10: Parsimonious Transfer for Sequence Tagging, SemEval 2021. BibTeX Code

  • Kemal Kurniawan, Lea Frermann, Philip Schulz and Trevor Cohn (2021). PPT: Parsimonious Parser Transfer for Unsupervised Cross-lingual Transfer, EACL 2021. BibTeX Code

  • Nelly Papalampidi, Frank Keller, Lea Frermann and Mirella Lapata (2020). Screenplay Summarization Using Latent Narrative Structure, ACL 2020. BibTeX Data Code

  • Lahari Poddar, Gyorgy Szarvas and Lea Frermann (2019). A Probabilistic Framework for Learning Domain Specific Hierarchical Word Embeddings, arXiv cs.CL 1910.07333.

  • Lea Frermann (2019). Extractive NarrativeQA with Heuristic Pre-Training, 2nd Workshop on Machine Reading for Question Answering (MRQA), Hong Kong. BibTeX Poster

  • Stefanos Angelidis, Diego Marcheggiani, Lluís Màrquez, Roi Blanco and Lea Frermann (2019). Book QA: Stories of Challenges and Opportunities, 2nd Workshop on Machine Reading for Question Answering (MRQA), Hong Kong. BibTeX

  • Nikos Papasarantopoulos, Lea Frermann, Mirella Lapata and Shay B. Cohen (2019). Partners in Crime: Multi-view Sequential Inference for Movie Understanding, EMNLP 2019, Hong Kong. BibTeX

  • Lea Frermann and Alex Klementiev (2019). Inducing Document Structure for Aspect-based Summarization, In Proceedings of ACL 2019, Florence, Italy. BibTeX Poster

  • Maria Barrett, Lea Frermann, Ana Valeria Gonzalez-Garduño and Anders Søgaard (2018). Unsupervised Induction of Linguistic Categories with Records of Reading, Speaking, and Writing, In Proceedings of NAACL-HLT 2018, New Orleans, Louisiana, USA. BibTeX

  • Lea Frermann, Shay B. Cohen and Mirella Lapata (2018). Whodunnit? Crime Drama as a Case for Natural Language Understanding, Transactions of the Association for Computational Linguistics (TACL). BibTeX Data Slides

  • Lea Frermann and Michael C. Frank (2017). Prosodic Features from Large Corpora of Child-Directed Speech as Predictors of the Age of Acquisition of Words, arXiv cs.CL 1709.09443. Repo

  • Lea Frermann and Gyorgy Szarvas (2017). Inducing Semantic Micro-Clusters from Deep Multi-View Representations of Novels, In Proceedings of the Conference on Empirical Methods on Natural Language Processing (EMNLP), Copenhagen, Denmark. BibTeX Poster

  • Lea Frermann (2017). Bayesian Models of Category Acquisition and Meaning Development (Ph.D. Thesis Abstract), The IEEE Intelligent Informatics Bulletin, 18(1), 23.

  • Lea Frermann (2017). Bayesian Models of Category Acquisition and Meaning Development, Ph.D. Thesis, University of Edinburgh, Scotland, UK.

  • Lea Frermann and Mirella Lapata (2016). A Bayesian Model of Diachronic Meaning Change, Transactions of the Association for Computational Linguistics (TACL). BibTeX Slides Code

  • Lea Frermann and Mirella Lapata (2016). Incremental Bayesian Category Learning from Natural Language, Cognitive Science. BibTeX

  • Lea Frermann (2016). A Bayesian Model of Joint Category and Feature Learning, 11th Workshop for Women in Machine Learning (WiML) in conj. with NIPS, Barcelona, Spain. Poster

  • Lea Frermann and Mirella Lapata (2015). A Bayesian Model for Joint Learning of Categories and their Features, In Proceedings of NAACL-HLT 2015, Denver, Colorado, USA. BibTeX Slides

  • Lea Frermann (2015). A Bayesian Model of the Temporal Dynamics of Word Meaning, 10th Workshop for Women in Machine Learning (WiML) in conj. with NIPS, Montreal, Canada. Poster

  • Lea Frermann and Mirella Lapata (2014). Incremental Bayesian Learning of Semantic Categories, In Proceedings of EACL 2014, Gothenburg, Sweden. BibTeX Data Poster

  • Lea Frermann, Ivan Titov and Manfred Pinkal (2014). A Hierarchical Bayesian Model for Unsupervised Induction of Script Knowledge, In Proceedings of EACL 2014, Gothenburg, Sweden. BibTeX Slides

  • Lea Frermann (2013). A Hierarchical Bayesian Model for Unsupervised Learning of Script Knowledge, MSc Thesis, Saarland University, Germany.

  • Lea Frermann and Francis Bond (2012). Cross-lingual Parse Disambiguation based on Semantic Correspondence, In Proceedings of ACL 2012, Jeju, Republic of Korea. BibTeX

  • Lea Frermann (2010). Information Extraction from Written Natural Language Input to Interactive Wayfinding Systems, BA Thesis, University of Bremen, Germany.


Teaching


    2021 (semester 1) Introduction to Machine Learning (COMP90049)
    2020 (semester 1) Introduction to Machine Learning (COMP90049)
    2019 (semester 2) Knowledge Technologies (COMP90049)