
Finite Precision Error Analysis of Neural Network Hardware Implementations

Published: 01 March 1993

Abstract

Through parallel processing, low-precision fixed-point hardware can be used to build a very high speed neural network computing engine, where the low precision yields a drastic reduction in system cost. The reduced silicon area required to implement a single processing unit is exploited by placing multiple processing units on a single piece of silicon and operating them in parallel. The key question that arises is how much precision is required to implement neural network algorithms on such low-precision hardware. A theoretical analysis of the error due to finite precision computation was undertaken to determine the precision necessary for successful forward retrieving and back-propagation learning in a multilayer perceptron. The analysis extends readily to a general finite precision analysis technique by which most neural network algorithms, under any set of hardware constraints, can be evaluated.
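The precision question the abstract raises can also be probed empirically with a simple quantization simulation: run a forward pass twice, once in double precision and once with weights, inputs, and accumulated sums rounded to a fixed-point grid, and compare the outputs. The sketch below is illustrative only and does not reproduce the paper's theoretical analysis; the network shape, the bit widths, and the round-to-nearest model are assumptions chosen for brevity.

# Illustrative sketch (not the paper's analysis): simulate a fixed-point
# forward pass of a small multilayer perceptron and measure the output
# error against a double-precision reference for several bit widths.
import numpy as np

def quantize(x, frac_bits):
    """Round-to-nearest quantization to a fixed-point grid with
    `frac_bits` fractional bits (step size 2**-frac_bits)."""
    step = 2.0 ** -frac_bits
    return np.round(x / step) * step

def forward(x, weights, frac_bits=None):
    """Forward pass of an MLP with sigmoid units. If `frac_bits` is
    given, weights, activations, and weighted sums are quantized at
    every step to mimic low-precision hardware."""
    a = x
    for W in weights:
        if frac_bits is not None:
            W = quantize(W, frac_bits)
            a = quantize(a, frac_bits)
        s = a @ W
        if frac_bits is not None:
            s = quantize(s, frac_bits)      # accumulator rounding
        a = 1.0 / (1.0 + np.exp(-s))        # sigmoid nonlinearity
    return a

rng = np.random.default_rng(0)
weights = [rng.normal(scale=0.5, size=(8, 16)),   # input -> hidden
           rng.normal(scale=0.5, size=(16, 4))]   # hidden -> output
x = rng.normal(size=(100, 8))                     # batch of test inputs

reference = forward(x, weights)                   # double-precision baseline
for bits in (4, 6, 8, 10, 12, 16):
    err = np.abs(forward(x, weights, frac_bits=bits) - reference).max()
    print(f"{bits:2d} fractional bits: max output error = {err:.2e}")

In a simulation of this kind the output error shrinks as the fractional bit width grows; such an experiment is the empirical counterpart of the statistical error analysis the paper derives, not a substitute for it.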

    Information

    Published In

    IEEE Transactions on Computers, Volume 42, Issue 3
    March 1993
    130 pages

    Publisher

    IEEE Computer Society

    United States

    Publication History

    Published: 01 March 1993

    Author Tags

    1. back-propagation learning
    2. error analysis
    3. feedforward neural nets
    4. finite precision computation
    5. forward retrieving
    6. low precision
    7. multilayer perceptron
    8. neural chips
    9. neural network algorithms
    10. neural network hardware
    11. parallel processing
    12. silicon area
    13. system cost

    Qualifiers

    • Research-article

    Bibliometrics & Citations

    Article Metrics

    • Downloads (Last 12 months)0
    • Downloads (Last 6 weeks)0
    Reflects downloads up to 21 Sep 2024

    Cited By
    • (2024) A Machine Learning-Based Toolbox for P4 Programmable Data-Planes. IEEE Transactions on Network and Service Management, 21(4), 4450-4465. DOI: 10.1109/TNSM.2024.3402074. Online publication date: 1-Aug-2024.
    • (2023) Energy-Efficient Hardware Implementation of Fully Connected Artificial Neural Networks Using Approximate Arithmetic Blocks. Circuits, Systems, and Signal Processing, 42(9), 5428-5452. DOI: 10.1007/s00034-023-02363-w. Online publication date: 24-Apr-2023.
    • (2022) A reliability concern on photonic neural networks. Proceedings of the 2022 Conference & Exhibition on Design, Automation & Test in Europe, 1059-1064. DOI: 10.5555/3539845.3540089. Online publication date: 14-Mar-2022.
    • (2022) AI on the edge: a comprehensive review. Artificial Intelligence Review, 55(8), 6125-6183. DOI: 10.1007/s10462-022-10141-4. Online publication date: 1-Dec-2022.
    • (2021) Statistical robustness of Markov chain Monte Carlo accelerators. Proceedings of the 26th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, 959-974. DOI: 10.1145/3445814.3446697. Online publication date: 19-Apr-2021.
    • (2020) Up or down? Adaptive rounding for post-training quantization. Proceedings of the 37th International Conference on Machine Learning, 7197-7206. DOI: 10.5555/3524938.3525605. Online publication date: 13-Jul-2020.
    • (2020) Addressing Irregularity in Sparse Neural Networks Through a Cooperative Software/Hardware Approach. IEEE Transactions on Computers, 69(7), 968-985. DOI: 10.1109/TC.2020.2978475. Online publication date: 1-Jul-2020.
    • (2020) Accelerating Sparse Convolutional Neural Networks Based on Dataflow Architecture. Algorithms and Architectures for Parallel Processing, 14-31. DOI: 10.1007/978-3-030-60239-0_2. Online publication date: 2-Oct-2020.
    • (2019) DWMAcc. ACM Transactions on Embedded Computing Systems, 18(5s), 1-19. DOI: 10.1145/3358199. Online publication date: 8-Oct-2019.
    • (2019) Neuro.ZERO. Proceedings of the 17th Conference on Embedded Networked Sensor Systems, 138-152. DOI: 10.1145/3356250.3360030. Online publication date: 10-Nov-2019.
    • Show More Cited By
