An autonomous debate system | Nature

  • 1

    Lawrence, J. & Reed, C. Argument mining: a survey. Comput. Linguist. 45, 765–818 (2019).


  • 2

    Devlin, J., Chang, M.-W., Lee, K. & Toutanova, K. BERT: pre-training of deep bidirectional transformers for language understanding. Preprint at https://arxiv.org/abs/1810.04805 (2018).

  • 3

    Peters, M. et al. Deep contextualized word representations. In Proc. 2018 Conf. North Am. Chapter Assoc. for Computational Linguistics: Human Language Technologies Vol. 1, 2227–2237 (Association for Computational Linguistics, 2018); https://www.aclweb.org/anthology/N18-1202

  • 4

    Radford, A. et al. Language models are unsupervised multitask learners. OpenAI Blog 1, http://www.persagen.com/files/misc/radford2019language.pdf (2019).

  • 5

    Socher, R. et al. Recursive deep models for semantic compositionality over a sentiment treebank. In Proc. 2013 Conf. on Empirical Methods in Natural Language Processing (EMNLP) 1631–1642 (Association for Computational Linguistics, 2013).

  • 6

    Yang, Z. et al. XLNet: generalized autoregressive pretraining for language understanding. In Adv. Neural Information Processing Systems (NeurIPS) 5753–5763 (Curran Associates, 2019).

  • 7

    Cho, K., van Merriënboer, B., Bahdanau, D. & Bengio, Y. On the properties of neural machine translation: encoder–decoder approaches. In Proc. 8th Workshop on Syntax, Semantics and Structure in Statistical Translation 103–111 (Association for Computational Linguistics, 2014).

  • 8

    Gambhir, M. & Gupta, V. Recent automatic text summarization techniques: a survey. Artif. Intell. Rev. 47, 1–66 (2017).


  • 9

    Young, S., Gašić, M., Thomson, B. & Williams, J. POMDP-based statistical spoken dialog systems: a review. Proc. IEEE 101, 1160–1179 (2013).


  • 10

    Gurevych, I., Hovy, E. H., Slonim, N. & Stein, B. Debating technologies (Dagstuhl Seminar 15512). Dagstuhl Reports 5 (2016).

  • 11

    Levy, R., Bilu, Y., Hershcovich, D., Aharoni, E. & Slonim, N. Context dependent claim detection. In Proc. COLING 2014, the 25th Int. Conf. on Computational Linguistics: Technical Papers 1489–1500 (Dublin City University and Association for Computational Linguistics, 2014); https://www.aclweb.org/anthology/C14-1141

  • 12

    Rinott, R. et al. Show me your evidence – an automatic method for context dependent evidence detection. In Proc. 2015 Conf. on Empirical Methods in Natural Language Processing 440–450 (Association for Computational Linguistics, 2015); https://www.aclweb.org/anthology/D15-1050

  • 13

    Shnayderman, I. et al. Fast end-to-end Wikification. Preprint at https://arxiv.org/abs/1908.06785 (2019).

  • 14

    Borthwick, A. A maximum entropy approach to named entity recognition. PhD thesis, New York Univ. https://cs.nyu.edu/media/publications/borthwick_andrew.pdf (1999).

  • 15

    Finkel, J. R., Grenager, T. & Manning, C. Incorporating non-local information into information extraction systems by Gibbs sampling. In Proc. 43rd Annu. Meet. Assoc. for Computational Linguistics 363–370 (Association for Computational Linguistics, 2005).

  • 16

    Levy, R., Bogin, B., Gretz, S., Aharonov, R. & Slonim, N. Towards an argumentative content search engine using weak supervision. In Proc. 27th Int. Conf. on Computational Linguistics (COLING 2018) 2066–2081, https://www.aclweb.org/anthology/C18-1176.pdf (International Committee on Computational Linguistics, 2018).

  • 17

    Ein-Dor, L. et al. Corpus wide argument mining – a working solution. In Proc. Thirty-Fourth AAAI Conf. on Artificial Intelligence 7683–7691 (AAAI Press, 2020).

  • 18

    Levy, R. et al. Unsupervised corpus-wide claim detection. In Proc. 4th Workshop on Argument Mining 79–84 (Association for Computational Linguistics, 2017); https://www.aclweb.org/anthology/W17-5110

  • 19

    Shnarch, E. et al. Will it blend? Blending weak and strong labeled data in a neural network for argumentation mining. In Proc. 56th Annu. Meet. Assoc. for Computational Linguistics Vol. 2, 599–605 (Association for Computational Linguistics, 2018); https://www.aclweb.org/anthology/P18-2095

  • 20

    Gleize, M. et al. Are you convinced? Choosing the more convincing evidence with a Siamese network. In Proc. 57th Annu. Meet. Assoc. for Computational Linguistics 967–976 (Association for Computational Linguistics, 2019).

  • 21

    Bar-Haim, R., Bhattacharya, I., Dinuzzo, F., Saha, A. & Slonim, N. Stance classification of context-dependent claims. In Proc. 15th Conf. Eur. Chapter Assoc. for Computational Linguistics Vol. 1, 251–261 (Association for Computational Linguistics, 2017).

  • 22

    Bar-Haim, R., Edelstein, L., Jochim, C. & Slonim, N. Improving claim stance classification with lexical knowledge expansion and context utilization. In Proc. 4th Workshop on Argument Mining 32–38 (Association for Computational Linguistics, 2017).

  • 23

    Bar-Haim, R. et al. From surrogacy to adoption; from bitcoin to cryptocurrency: debate topic expansion. In Proc. 57th Annu. Meet. Assoc. for Computational Linguistics 977–990 (Association for Computational Linguistics, 2019).

  • 24

    Bilu, Y. et al. Argument invention from first principles. In Proc. 57th Annu. Meet. Assoc. for Computational Linguistics 1013–1026 (Association for Computational Linguistics, 2019).

  • 25

    Ein-Dor, L. et al. Semantic relatedness of Wikipedia concepts – benchmark data and a working solution. In Proc. Eleventh Int. Conf. on Language Resources and Evaluation (LREC 2018) 2571–2575 (European Language Resources Association, 2018).

  • 26

    Pahuja, V. et al. Joint learning of correlated sequence labelling tasks using bidirectional recurrent neural networks. In Proc. Interspeech 2017 548–552 (International Speech Communication Association, 2017).

  • 27

    Mirkin, S. et al. Listening comprehension over argumentative content. In Proc. 2018 Conf. on Empirical Methods in Natural Language Processing 719–724 (Association for Computational Linguistics, 2018).

  • 28

    Lavee, T. et al. Towards effective rebuttal: listening comprehension using corpus-wide claim mining. In ArgMining Workshop 58–66 (Association for Computational Linguistics, 2019).

  • 29

    Orbach, M. et al. A dataset of general-purpose rebuttal. In Proc. 2019 Conf. on Empirical Methods in Natural Language Processing 5595–5605 (Association for Computational Linguistics, 2019).

  • 30

    Slonim, N., Atwal, G. S., Tkačik, G. & Bialek, W. Information-based clustering. Proc. Natl Acad. Sci. USA 102, 18297–18302 (2005).


  • 31

    Ein Dor, L. et al. Learning thematic similarity metric from article sections using triplet networks. In Proc. 56th Annu. Meet. Assoc. for Computational Linguistics Vol. 2, 49–54 (Association for Computational Linguistics, 2018); https://www.aclweb.org/anthology/P18-2009

  • 32

    Shechtman, S. & Mordechay, M. Emphatic speech prosody prediction with deep LSTM networks. In 2018 IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP) 5119–5123 (IEEE, 2018).

  • 33

    Mass, Y. et al. Word emphasis prediction for expressive text to speech. In Interspeech 2868–2872 (International Speech Communication Association, 2018).

  • 34

    Feigenblat, G., Roitman, H., Boni, O. & Konopnicki, D. Unsupervised query-focused multi-document summarization using the cross entropy method. In Proc. 40th Int. ACM SIGIR Conf. on Research and Development in Information Retrieval 961–964 (Association for Computing Machinery, 2017).

  • 35

    Daxenberger, J., Schiller, B., Stahlhut, C., Kaiser, E. & Gurevych, I. ArgumenText: argument classification and clustering in a generalized search scenario. Datenbank-Spektrum 20, 115–121 (2020).

  • 36

    Gretz, S. et al. A large-scale dataset for argument quality ranking: construction and analysis. In Thirty-Fourth AAAI Conf. on Artificial Intelligence 7805–7813 (AAAI Press, 2020); https://aaai.org/ojs/index.php/AAAI/article/view/6285

  • 37

    Goodfellow, I., Bengio, Y. & Courville, A. Deep Learning (MIT Press, 2016).

  • 38

    Samuel, A. L. Some studies in machine learning using the game of checkers. IBM J. Res. Develop. 3, 210–229 (1959).


  • 39

    Tesauro, G. TD-Gammon, a self-teaching backgammon program, achieves master-level play. Neural Comput. 6, 215–219 (1994).


  • 40

    Campbell, M., Hoane, A. J. Jr & Hsu, F.-h. Deep Blue. Artif. Intell. 134, 57–83 (2002).


  • 41

    Ferrucci, D. A. Introduction to “This is Watson”. IBM J. Res. Dev. 56, 235–249 (2012).


  • 42

    Silver, D. et al. A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science 362, 1140–1144 (2018).


  • 43

    Coulom, R. Efficient selectivity and backup operators in Monte-Carlo tree search. In 5th Int. Conf. on Computers and Games inria-0011699 (Springer, 2006).

  • 44

    Vinyals, O. et al. Grandmaster level in StarCraft II using multi-agent reinforcement learning. Nature 575, 350–354 (2019).

