
GM bets big on batteries: A new $2.3 billion plant cranks out Ultium cells to power a future line of electric vehicles

Published: 01 December 2020
Abstract

In April of 1966, a shiny white Chevrolet Impala became the first car off the assembly line of a new General Motors plant in Lordstown, Ohio. It was the glorious start of what became a checkered history for the area. This blue-collar town survived an infamous labor strike in 1972, the Chapter 11 bankruptcy of GM in 2009, and a string of unmemorable small cars—including the Chevy Vega and Cavalier—before emerging as a symbol of industrial rebirth with the production of the Chevy Cruze in 2010.

But things soon went to hell again, and GM shuttered the plant in 2019. Even then, the pain wasn't over. The plant became a political football for President Trump, who urged local residents not to sell their homes because of jobs he promised to restore. He later rebuked GM for not building COVID-19 ventilators in a factory it no longer owned. (By that time, GM had sold the mothballed plant to Lordstown Motors Corp., a long-shot electric-truck startup.)

Now, nine-lives Lordstown is getting another chance to play a significant role in the automotive future. Whether it succeeds hinges on the biggest multibillion-dollar question in the global auto industry: Can GM, or any legacy automaker for that matter, transform itself into a true rival to Tesla, whose electric cars—and sky-high stock price—dominate the EV space? To do that, it'll need better, stronger, more-affordable batteries. That's where GM's Ultium project comes in.



      Published In

IEEE Spectrum, Volume 57, Issue 12
      Dec. 2020
      56 pages

      Publisher

      IEEE Press

      Publication History

      Published: 01 December 2020

      Qualifiers

      • Research-article
