Yahoo Search - Web Search

Search results

  1. 2024.esec-fse.org › profile › charlessutton · Charles Sutton - FSE 2024

    Charles Sutton is a Research Scientist at Google Research. He is interested in a broad range of applications of machine learning, including NLP, analysis of computer systems, software engineering, and program synthesis. His work in software engineering has won an ACM Distinguished Paper Award.

  2. Charles Sutton is a Research Scientist at Google DeepMind. His research in machine learning is motivated by a broad range of applications, including natural language processing (NLP), analysis of computer systems, sustainable energy, data analysis, programming languages, and software engineering.

  3. Nov. 17, 2010 · This tutorial describes conditional random fields, a popular probabilistic method for structured prediction. CRFs have seen wide application in natural language processing, computer vision, and bioinformatics.

    • Charles Sutton, Andrew McCallum
    • arXiv:1011.4088 [stat.ML]
    • 2010
    • 90 pages
  4. Google, University of Edinburgh - Cited by 20,322 - Machine learning - artificial intelligence - natural language processing - programming languages - software engineering.

    • Abstract
    • 2.2 Generative versus Discriminative Models
    • 2.2.2 Sequence Models
    • 2.2.3 Comparison

    Many tasks involve predicting a large number of variables that depend on each other as well as on other observed variables. Structured prediction methods are essentially a combination of classification and graphical modeling. They combine the ability of graphical models to compactly model multivariate data with the ability of classification method...

    In this section we discuss several examples of simple graphical models that have been used in natural language processing. Although these examples are well-known, they serve both to clarify the definitions in the previous section, and to illustrate some ideas that will arise again in our discussion of CRFs. We devote special attention to the hidden...

    Classifiers predict only a single class variable, but the true power of graphical models lies in their ability to model many variables that are interdependent. In this section, we discuss perhaps the simplest form of dependency, in which the output variables in the graphical model are arranged in a sequence. To motivate this model, we discuss an ap...
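
    This excerpt corresponds to the "Sequence Models" heading above, and the model it motivates is the hidden Markov model (HMM). As a minimal sketch in notation assumed here (labels y_1,...,y_T, observations x_1,...,x_T), the HMM arranges the output variables in a chain and factorizes the joint distribution as

        p(y, x) = \prod_{t=1}^{T} p(y_t \mid y_{t-1}) \, p(x_t \mid y_t),

    where p(y_1 \mid y_0) is read as the initial label distribution. Each label depends only on its predecessor, which is the "simplest form of dependency" the excerpt describes.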

    Both generative models and discriminative models describe distributions over (y,x), but they work in different directions. A generative model, such as the naive Bayes classifier and the HMM, is a family of joint distributions that factorizes as p(y,x) = p(y)p(x|y), that is, it describes how to sample, or “generate,” values for features given the...
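
    To make the contrast concrete, a minimal sketch under the excerpt's notation (the weights \theta are an assumed parameterization): the naive Bayes classifier models the joint distribution, while logistic regression, its discriminative counterpart, models the conditional distribution directly,

        p(y, x) = p(y) \prod_{k=1}^{K} p(x_k \mid y)   (naive Bayes, generative),

        p(y \mid x) = \frac{1}{Z(x)} \exp\Big\{ \theta_y + \sum_{k=1}^{K} \theta_{y,k} \, x_k \Big\}   (logistic regression, discriminative),

    with Z(x) a normalizing constant. Applying the same change of direction to the HMM sketched above yields the linear-chain conditional random field.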

  5. My research concerns a broad range of applications of probabilistic methods for machine learning, including software engineering, natural language processing, computer security, queueing theory, and sustainable energy.

  6. Charles Sutton. I joined Google in January 2018. My research interests span deep learning, probabilistic machine learning, programming languages, data mining, and software engineering. I'm especially excited about applying deep learning to huge code bases, finding patterns about what makes for good code, leading to tools to help people write ...