Rapid Communication

Learning phase transitions from dynamics

Evert van Nieuwenburg, Eyal Bairey, and Gil Refael
Phys. Rev. B 98, 060301(R) – Published 9 August 2018

Abstract

We propose the use of recurrent neural networks for classifying phases of matter based on the dynamics of experimentally accessible observables. We demonstrate this approach by training recurrent networks on the magnetization traces of two distinct models of one-dimensional disordered and interacting spin chains. The obtained phase diagram for a well-studied model of the many-body localization transition shows excellent agreement with previously known results obtained from time-independent entanglement spectra. For a periodically driven model featuring an inherently dynamical time-crystalline phase, the phase diagram that our network traces coincides with an order parameter for its expected phases.
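
To make the approach concrete, the sketch below shows how such a recurrent classifier could be set up in Python. It is an illustrative reconstruction, not the authors' published code: the Keras LSTM layer, the trace length, and the placeholder data are assumptions, while the hyperparameter values (32 LSTM units, a dropout of 0.2, l2 regularization of 0.01, a batch size of 64, and 25 training epochs) are those quoted in the caption of Fig. 2 below.

```python
# Minimal sketch (assumption, not the authors' published code): train an LSTM
# to classify phases from magnetization traces m(t).
import numpy as np
import tensorflow as tf

T = 100           # number of time steps per trace (assumed value)
n_features = 1    # one observable, the magnetization, per time step

# Placeholder data standing in for traces simulated deep inside each phase;
# label 0 = thermal/ergodic, label 1 = many-body localized (illustrative only).
x_train = np.random.randn(512, T, n_features).astype("float32")
y_train = np.random.randint(0, 2, size=512)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(T, n_features)),
    # 32 LSTM units, dropout 0.2, l2 regularization 0.01 (values quoted in Fig. 2)
    tf.keras.layers.LSTM(32, dropout=0.2,
                         kernel_regularizer=tf.keras.regularizers.l2(0.01)),
    tf.keras.layers.Dense(2, activation="softmax"),  # one output neuron per phase
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, batch_size=64, epochs=25, verbose=0)

# A trained network then assigns phase probabilities to traces from
# intermediate parameter values, where its uncertainty locates the transition.
print(model.predict(x_train[:1], verbose=0))
```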

  • Received 23 December 2017
  • Revised 10 April 2018

DOI: https://doi.org/10.1103/PhysRevB.98.060301

©2018 American Physical Society

Physics Subject Headings (PhySH)

Condensed Matter, Materials & Applied Physics; Statistical Physics & Thermodynamics; General Physics

Authors & Affiliations

Evert van Nieuwenburg¹, Eyal Bairey², and Gil Refael¹

  • ¹Institute for Quantum Information and Matter, Caltech, Pasadena, California 91125, USA
  • ²Physics Department, Technion, Haifa 3200003, Israel

Issue

Vol. 98, Iss. 6 — 1 August 2018

Images

  • Figure 1

    Unrolling a recurrent network. On the left, (a subpart of) a neural network N is shown with its output feeding back into its input, making it a recurrent neural network. On the right, the unrolled version of the same network shows that the output at step t is fed back as an input at time step t+1. The recurrent connections have their own weights, which are optimized during training. (A minimal numerical sketch of this unrolling follows the figure list.)

  • Figure 2

    Detecting the MBL transition in the random-field Heisenberg model (1). The left panel shows the dependence of the network's confusion C on the number of LSTM neurons N for ɛ=0.5 and a fixed set of hyperparameters: a dropout of 0.2, l2 regularization of 0.01, a batch size of 64, and 25 training epochs. The right panel shows the resulting phase diagram (the color bar represents the confusion C) in the ɛ vs W plane, obtained with N=32 and averaged over ten retrainings.

  • Figure 3

    A recurrent neural network distinguishes between three dynamical phases of a time-dependent model after being trained on example curves m(t) at ε=0, 0.7, and π/2. The gray curves show the outputs of the three neurons assigned to recognize the three phases (time-crystalline, Floquet-ergodic, and Floquet-MBL). In green (with dots), the confusion C of the network is shown, indicating two transition points between these phases. In orange, we show the long-time imbalance I(t) measured at an odd driving period; it takes a negative value in the time-crystalline phase, a vanishing value in the Floquet-ergodic phase, and a positive value in the Floquet-MBL phase. The phase boundaries extracted by the network are consistent with, and appear sharper than, those indicated by the imbalance.
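
As a complement to Fig. 1, the following minimal sketch unrolls a single recurrent cell by hand. It only illustrates the feedback structure described in the caption and is not the network used in the paper; the plain tanh cell, the weight scales, and the trace length are assumptions.

```python
# Illustrative unrolling of a recurrent cell (assumption: a plain tanh cell,
# not the LSTM used in the paper). The hidden state produced at step t is fed
# back into the cell at step t+1 through the recurrent weights W_rec.
import numpy as np

rng = np.random.default_rng(0)
n_hidden, n_input, T = 8, 1, 50      # sizes chosen only for illustration

W_in  = rng.normal(scale=0.1, size=(n_hidden, n_input))   # input weights
W_rec = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # recurrent weights, trained like any others
b     = np.zeros(n_hidden)

m = rng.normal(size=(T, n_input))    # placeholder magnetization trace m(t)
h = np.zeros(n_hidden)               # hidden state carried between time steps

for t in range(T):
    h = np.tanh(W_in @ m[t] + W_rec @ h + b)   # step t output re-enters at step t+1

# After the last time step, a dense readout layer would map h to phase labels.
print(h)
```

During training, gradients flow backward through this unrolled chain (backpropagation through time), which is how the recurrent weights acquire the optimized values mentioned in the caption of Fig. 1.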
