Scalar replacement of array references; data-cache optimizations; procedure integration; tail-call optimization, including tail-recursion elimination; scalar replacement of aggregates; sparse conditional constant propagation; interprocedural constant propagation; procedure ...
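Of the optimizations listed above, tail-recursion elimination is the easiest to illustrate in isolation. The following sketch (written for illustration here, not taken from any of the indexed works) shows a tail-recursive sum and the loop that the transformation conceptually produces:

```python
def total_rec(xs, acc=0):
    """Tail-recursive sum: the recursive call is the last action taken."""
    if not xs:
        return acc
    return total_rec(xs[1:], acc + xs[0])

def total_iter(xs, acc=0):
    """What tail-recursion elimination produces: the call becomes a
    parameter update followed by a jump back to the top of the function."""
    while xs:
        xs, acc = xs[1:], acc + xs[0]
    return acc
```

The iterative form computes the same result without growing the call stack, which is the whole point of the optimization.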
Unified wearable data has resulted in a slew of unproven benefits, including the lookaside buffer and DNS. After years of unproven research into scatter/gather I/O, we argue the analysis of Iv7. Our latest algorithm for replicated information, Shaman, is a solution to both problems.
In compiler design courses, students learn how a program written in a high-level programming language, designed for human understanding, is systematically converted through different representations into the low-level assembly language understood by machines. This article presents the design, educational characteristics and possibilities of a modular, didactic compiler for a Pascal-like programming minilanguage that is a superset of Niklaus Wirth's PL/0. Its main feature is that it implements the compilation phases in such a way that the information delivered from each phase to the next can be reflected as an XML document, which can be studied separately. It is also shown that the design is suitable for inclusion as a learning tool in compiler design courses, and that such a compiler can be implemented in a high-level language like Python.
Graphics Processing Units (GPUs) are specialized coprocessors initially conceived to accelerate vector operations such as graphics rendering. Writing and configuring efficient algorithms for GPU devices is still a hard problem. The Algorithm Selection Problem consists of finding a combination of algorithms, or a configuration of an algorithm, that optimizes the solution of a given problem instance or set of instances. An auto-tuner is a program that solves the Algorithm Selection Problem automatically. In this paper we implement an autotuner for the compilation flags of GPU algorithms, using the OpenTuner framework. The autotuner produces a set of compilation flags that aims to minimize the time to solve a given problem on a specific GPU device. We analyse the performance gains from tuning the compilation flags of heterogeneous GPU algorithms across three different GPU devices, and show that it is possible to gain performance by automatically and empirically selecting a set of compilation flags for the same GPU algorithm on different devices. In one of the experimental settings we achieved a 30% speedup compared with the compiler's high-level optimization options.
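The empirical-tuning loop can be sketched without the OpenTuner machinery. The following is a minimal random-search tuner over a hypothetical flag space; the flag names and the `measure` timings are stand-ins (a real tuner would compile and run the kernel for each configuration), not the paper's implementation.

```python
import random

# Hypothetical flag space; real GPU compilers expose many more options.
FLAG_SPACE = {
    "opt_level": ["-O0", "-O1", "-O2", "-O3"],
    "fast_math": ["", "--use_fast_math"],
    "maxrregcount": ["", "--maxrregcount=32", "--maxrregcount=64"],
}

def measure(config):
    """Stand-in for 'compile with these flags and time the result'.
    The numbers here are invented purely for the sketch."""
    t = {"-O0": 4.0, "-O1": 3.0, "-O2": 2.2, "-O3": 2.0}[config["opt_level"]]
    if config["fast_math"]:
        t *= 0.9
    return t

def tune(trials=50, seed=0):
    """Random search: the simplest form of empirical autotuning.
    Returns the best (time, configuration) pair seen."""
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        cfg = {k: rng.choice(v) for k, v in FLAG_SPACE.items()}
        t = measure(cfg)
        if best is None or t < best[0]:
            best = (t, cfg)
    return best
```

Frameworks like OpenTuner replace the random sampling with smarter search techniques, but the structure — sample a configuration, measure it empirically, keep the best — is the same.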
In this study, the effect of the Scratch environment on teaching algorithms in the elementary school 6th-grade Information and Communication Technologies course was examined. The research used an experimental method with a control group, a pretest-posttest design, and a convenience sample of 60 6th-grade students. The research instrument was an achievement test designed to determine the effect of Scratch on learning algorithms. During the implementation, the experiment group studied using Scratch while the control group studied with traditional methods. The data were analyzed using independent-samples t-tests, paired-samples t-tests and ANCOVA. According to the findings, there is no statistically significant difference between the posttest achievement scores of the experiment and control groups. Similarly, in terms of gender there is no statistically significant difference between the posttest scores of the experiment and control groups.
How did the old term "code", typical of the "electronic computing" of the 1950s, gradually come to be replaced in computer scientists' vocabulary by the word "language", and what did this terminological shift signify? This article traces the stages of that evolution, first within the small circle of French proto-computer scientists, then on the international scene.
Compiler construction primarily comprises standard phases such as lexical analysis, syntax analysis, semantic analysis, intermediate code generation, code optimization and target code generation. However, improvements in computer architecture create a need to improve code size, instruction execution speed and related metrics. Hence, better and more efficient compiler analysis and optimization techniques, such as advanced dataflow analysis, leaf-function optimization and cross-linking optimizations, are adopted today to keep pace with the latest trends in hardware technology and to generate better target code for recent machines.
The prime objective of this study is to determine whether inducing Greibach Normal Form (GNF) in arithmetic expression grammars improves the processing speed of a conventional LL(1) parser. A conventional arithmetic expression grammar and its LL(1) equivalent are used in the study. A transformation method is defined that converts the selected grammar into Greibach normal form, which is then turned into a GNF-based parser through a method proposed in the study. The two parsers are analyzed over 399 cases of arithmetic expressions. In the statistical analysis, the results are first examined with the Kolmogorov-Smirnov and Shapiro-Wilk tests, and the significance of the proposed method is evaluated with the Mann-Whitney U test. The study shows that the GNF-based LL(1) parser for arithmetic takes fewer steps than the conventional LL(1) parser, and the ranks and asymptotic significance indicate that the GNF-based LL(1) method significantly outperforms the conventional LL(1) approach. The study adds to the knowledge of parsers, parsing expression grammars (PEGs), LL(1) grammars, GNF-induced grammar structure, and the induction of arithmetic LL(1) PEGs into GNF-based grammars.
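The step counts being compared can be made concrete with a toy table-driven LL(1) parser. The grammar below (E → T R, R → + T R | ε, T → n) is a standard right-recursive additive-expression grammar chosen for illustration, not one of the study's grammars; the parser counts how many table expansions a derivation needs.

```python
# Parse table for:  E -> T R   R -> + T R | eps   T -> n
# Keyed by (nonterminal, lookahead); '$' is end of input.
GRAMMAR = {
    ("E", "n"): ["T", "R"],
    ("R", "+"): ["+", "T", "R"],
    ("R", "$"): [],
    ("T", "n"): ["n"],
}

def ll1_parse(tokens):
    """Table-driven LL(1) parse of a token list like ['n', '+', 'n'].
    Returns the number of table expansions (derivation steps) used."""
    stack, steps, i = ["$", "E"], 0, 0
    toks = tokens + ["$"]
    while stack:
        top = stack.pop()
        look = toks[i]
        if top == look:            # terminal: match and advance the input
            i += 1
        else:                      # nonterminal: expand via the parse table
            steps += 1
            stack.extend(reversed(GRAMMAR[(top, look)]))
    return steps
```

Parsing `n + n` with this grammar takes five expansions; a GNF grammar, whose productions all begin with a terminal, changes how many such steps a derivation requires, which is the quantity the study measures.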
A brief comparison between high- and low-level languages. Python and COBOL are selected as examples, and their differences and applications are explored.
The Al-Khowarizmi system implements one of the two parts of a compiler, namely Analysis. Al-Khowarizmi has no Synthesis part, so to obtain object code (an executable file) it must rely on an external compiler. The system comprises three main processes: 1) converting the code, 2) running the program, and 3) printing documents covering the source code, the converted code, and the source-code tokens together with their attributes and transformations. With Al-Khowarizmi, beginners can do basic computer programming using a basic high-level programming language whose commands are in Indonesian. They can also find mistakes in the statements of their programs by reading the Indonesian-language error messages that Al-Khowarizmi produces. In addition, Al-Khowarizmi provides a direct link to a compiler that accepts the converted code, so the converted code does not need to be compiled outside the Al-Khowarizmi environment. Al-Khowarizmi also produces printed documentation containing the source code, the converted code, and the source-code tokens with their attributes and transformations; other documents take the form of files, namely the source code file, the converted code file, and the executable file.
A personal archive of material related to formal methods has been deposited at Swansea University by the author in 2018. This paper documents the contents of the archive and includes associated publications. The archival material forms part of a larger History of Computing Collection founded by Prof. John Tucker at Swansea in 2007 and held at the University. It is hoped that this paper can aid future archivists with placing the material in context.
This paper is based on research results achieved by a collaboration between Ericsson Hungary Ltd. and the Large Scale Testing Research Lab of Eötvös Loránd University, Budapest. We present design issues and empirical observations on extending an existing industrial toolset with a new intermediate language. Context: The industry partner's toolset uses C/C++ as an intermediate language, providing good execution performance but "somewhat long" build times, offering a sub-optimal experience for users. Objective: In cooperation with our industry partner, our task was to perform an experiment with Java as a different intermediate language and evaluate the results, to see whether this could improve build times. Method: We extended the toolset to use Java as an intermediate language. Results: Our measurements show that using Java as an intermediate language improves build times significantly. We also found that, while the runtime performance of C/C++ is better in some situations, Java, at least in our testing scenarios, can be a viable alternative that improves developer productivity. Our contribution is unique in that both build pipelines can use the same source code as input, written in the same language; generate intermediate code with the same high-level structure; compile into executables configured with the same files; run on the same machine; show the same behaviour; and generate the same logs. Conclusions: We created an alternative build pipeline that may enhance the productivity of our industry partner's test developers by reducing the length of builds during their daily work.
There are many domain libraries, but despite the performance benefits of compilation, domain-specific languages are comparatively rare due to the high cost of implementing an optimizing compiler. We propose commensal compilation, a new strategy for compiling embedded domain-specific languages by reusing the massive investment in modern language virtual machine platforms. Commensal compilers use the host language's front-end, use host platform APIs that enable back-end optimizations by the host platform JIT, and use an autotuner for optimization selection. The cost of implementing a commensal compiler is only the cost of implementing the domain-specific optimizations. We demonstrate the concept by implementing a commensal compiler for the stream programming language StreamJIT atop the Java platform. Our compiler achieves performance 2.8 times better than the StreamIt native code (via GCC) compiler with considerably less implementation effort.
Top-down (LL) context-sensitive parsers with integrated synthesis and use of attributes are easy to express in functional programming languages, but the elegant functional programming model can also serve as an exact prototype for a more efficient implementation of the technique in ANSI C. The result is a compiler-compiler that brings unlimited lookahead and backtracking, extended BNF notation, and parameterized grammars with (higher-order) meta-parameters to the world of C programming. This article reports on the utility three years after its public release. Precc generates standard ANSI C and is 'plug compatible' with lex-generated lexical analyzers prepared for the UNIX yacc compiler-compiler. In contrast to yacc, however, the generated code is modular, which allows parts of scripts to be compiled separately and linked together incrementally. The constructed code is relatively efficient, as demonstrated by the example Occam parser treated in depth here, but the main advantages we claim are ease of use, separation of specification and implementation concerns, and maintainability.
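The functional model that serves as the prototype here is essentially that of parser combinators. A minimal sketch (in Python rather than a functional language or the generated C, and not precc's actual implementation): each parser maps an input position to a list of (result, new position) pairs, and returning a list of alternatives is what gives backtracking for free.

```python
def lit(c):
    """Parser for a single literal character."""
    def p(s, i):
        return [(c, i + 1)] if s[i:i+1] == c else []
    return p

def seq(p, q):
    """Run p, then q on the rest, pairing their results."""
    def r(s, i):
        return [((a, b), j) for a, k in p(s, i) for b, j in q(s, k)]
    return r

def alt(p, q):
    """Choice with full backtracking: every parse of p, then every parse of q."""
    def r(s, i):
        return p(s, i) + q(s, i)
    return r
```

Because combinators are ordinary higher-order functions, parameterized grammars with meta-parameters fall out naturally: a "grammar rule" is just a function that takes other parsers as arguments.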
Distributed systems are increasingly being leveraged for deep learning, with different architectural frameworks available (e.g. convolutional neural networks, TensorFlow), each with its own salient features. This study focuses on image recognition, a core discipline within Big Data, by setting up two frameworks and performing comparative tests. The paper concludes with results from the comparative analysis and recommendations. The project seeks to implement image recognition with deep learning on a distributed system hosted by Spark, with greater recognition accuracy and higher performance than is presently available.
ProCoS aims to improve dependability, reduce timescales and cut the development costs of constructing embedded systems, particularly in real-time and safety-critical applications. It uses and develops the results of basic research into fundamental properties of interactive systems. It aims to provide a scientific basis for future standards of practice in the development of embedded systems, ensuring correctness of all stages of development, from elicitation and analysis of requirements through design and implementation of programs down to compilation and execution on verified hardware.
With the increasing sophistication of attack techniques and scenarios, appropriate automated decision-making systems need to be developed. This paper defines a new security language for coping with attack scenarios by representing both attacks and security solutions in a single syntactic framework; a corresponding semantic analysis is also introduced. To implement this reasoning, we introduce a security compiler-like architecture that brings substantial novelties relative to the traditional compilers used in software engineering. The most important innovations are the computation of abstract attack/countermeasure specifications and the resolution of the Fundamental Security Equation (FSE). Unlike existing compilation schemes, our approach builds a relational specification of the attack through a traversal of its semantic tree. The security solution(s) corresponding to the attack of interest is (are) then found by solving the FSE in the relational algebra of attacks and decisions. Concrete examples are analyzed to highlight the potential of the proposed relational-algebra-based security language, called ReAlSec.
The International Journal on Soft Computing, Artificial Intelligence and Applications (IJSCAI) is an open-access, peer-reviewed journal that provides an excellent international forum for sharing knowledge and results in the theory, methodology and applications of Artificial Intelligence and Soft Computing. The journal seeks significant contributions to all major fields of Artificial Intelligence and Soft Computing, in both theoretical and practical aspects, and aims to provide a platform where researchers and practitioners from academia and industry can meet and share cutting-edge developments. Authors are solicited to contribute articles that illustrate research results, projects, surveying works and industrial experiences describing significant advances in the areas of database management systems.
The 6th International Conference on Computer Science, Engineering and Applications (CSEA 2020) will provide an excellent international forum for sharing knowledge and results in the theory, methodology and applications of Computer Science, Engineering and Applications. The conference seeks significant contributions to all major fields of Computer Science, Engineering and Information Technology, in both theoretical and practical aspects.
The International Journal of Artificial Intelligence & Applications (IJAIA) is a bimonthly open-access, peer-reviewed journal that publishes articles contributing new results in all areas of Artificial Intelligence and its applications. It is an international journal intended for professionals and researchers in all fields of AI, including programmers and software and hardware manufacturers. The journal also aims to publish new attempts, in the form of special issues, on emerging areas of Artificial Intelligence and its applications.
This paper generalizes an algebraic method for the design of a correct compiler to tackle specification and verification of an optimized compiler. The main optimization issues of concern here include the use of existing contents of registers where possible and the identification of common expressions. A register table is
introduced in the compiling specification predicates to map each register to an expression whose value is held by it. We define different kinds of predicates to specify compilation of programs, expressions and Boolean tests. A set of theorems relating to these predicates, acting as a correct compiling specification, are presented and an example proof within the refinement algebra of the programming language is given. Based on these theorems, a prototype compiler in Prolog is produced.
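The register table's role can be sketched concretely. The following toy code generator (an illustration written for this note, not the paper's Prolog prototype) keeps a map from each register to the expression whose value it holds, so a common expression is served from an existing register instead of being recomputed:

```python
def compile_exprs(exprs):
    """Compile a list of source expressions, consulting a register table
    to reuse a register that already holds the needed value."""
    reg_table = {}          # register name -> expression it currently holds
    code, next_reg = [], 0

    def reg_for(expr):
        nonlocal next_reg
        for r, held in reg_table.items():
            if held == expr:       # common expression: reuse, emit no code
                return r
        r = f"r{next_reg}"
        next_reg += 1
        code.append(f"{r} := {expr}")
        reg_table[r] = expr
        return r

    return [reg_for(e) for e in exprs], code
```

Compiling `["a+b", "c", "a+b"]` emits only two instructions, with the second occurrence of `a+b` resolved to the register already holding it. The compiling-specification predicates in the paper express the same bookkeeping relationally.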
While a compiler produces low-level object code from high-level source code, a decompiler produces high-level code from low-level code and has applications in the testing and validation of safety-critical software. The decompilation of an object code provides an independent demonstration of correctness that is hard to better for industrial purposes (an alternative is to prove the compiler correct). But, although compiler compilers are in common use in the software industry, a decompiler compiler is much more unusual. It turns out that a data type specification for a programming-language grammar can be remolded into a functional program that enumerates all of the abstract syntax trees of the grammar. This observation is the springboard for a general method for compiling decompilers from the specifications of (non-optimizing) compilers. This paper deals with methods and theory, together with an application of the technique. The correctness of a decompiler generated from a simple occam-like compiler specification is demonstrated. The basic problem of enumerating the syntax trees of grammars, and then stopping, is shown to have no recursive solution, but methods of abstract interpretation can be used to guarantee the adequacy and completeness of our technique in practical instances, including the decompiler for the language presented here.
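The enumeration idea can be sketched directly. This illustrative generator (not the paper's functional program) yields every abstract syntax tree of a grammar up to a height bound; the explicit bound stands in for the abstract-interpretation argument the paper uses to justify stopping.

```python
def trees(sym, grammar, depth):
    """Yield every abstract syntax tree for `sym` of height <= depth.
    Terminals (symbols with no productions) are their own trees."""
    if sym not in grammar:
        yield sym
        return
    if depth == 0:
        return
    for rhs in grammar[sym]:
        for kids in product_trees(rhs, grammar, depth - 1):
            yield (sym,) + kids

def product_trees(syms, grammar, depth):
    """All combinations of child trees for one right-hand side."""
    if not syms:
        yield ()
        return
    for first in trees(syms[0], grammar, depth):
        for rest in product_trees(syms[1:], grammar, depth):
            yield (first,) + rest
```

For the grammar E → n | E + E, height 1 admits only the tree for `n`, and height 2 adds the single tree for `n + n`; without a bound, the enumeration never terminates, which is the non-recursiveness the paper proves.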
A compiler may be specified by a description of how each construct of the source language is translated into a sequence of object-code instructions. It is possible to produce a compiler prototype almost directly from this specification in the form of a logic program, which defines a relation between allowed high-level and low-level program constructs. Normally a high-level program is supplied as input to a compiler and object code is returned. Because of the declarative nature of a logic program, it is possible for the object code to be supplied and the allowed high-level programs returned, resulting in a decompiler, provided enough information is available in the object code. This paper discusses the problems of adopting such an approach in practice. A simple compiler and decompiler are presented in full as an example in the logic programming language Prolog, together with some sample output. The possible benefits of using constraint logic programming are also considered. Potential applications include reverse engineering in the software maintenance process, verification of safety-critical object code, quality assessment of code, and program debugging tools.
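The relational reading of a compiler specification can be sketched even outside Prolog. The toy rule table below (statements and instruction sequences invented for illustration) is a single relation queried in both directions: forwards it compiles, backwards it decompiles.

```python
# One relation between source constructs and instruction sequences,
# a stand-in for the clauses of the Prolog compiler specification.
RULES = [
    ("x := x + 1", ["LOAD x", "PUSH 1", "ADD", "STORE x"]),
    ("x := 0",     ["PUSH 0", "STORE x"]),
]

def compile_stmt(stmt):
    """Forwards query: statement in, instruction sequence out."""
    return next(code for s, code in RULES if s == stmt)

def decompile(code):
    """Backwards query: which statements map to this instruction
    sequence? May return several, or none."""
    return [s for s, c in RULES if c == code]
```

In Prolog the two directions come from one predicate for free; the Python sketch makes the directionality explicit, and also shows why decompilation can yield zero or many candidate programs when the object code underdetermines the source.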
This chapter presents a provably correct compilation scheme that converts a program into a network of abstract components that interact by exchanging request and acknowledgement signals. We provide a systematic and modular technique for correctly realizing the abstract components in hardware, and use a standard programming language to describe both algorithms and circuits. The resulting circuitry, which behaves according to the program, has the same structure as the program. The circuit logic is asynchronous, with no global clock.
We present a new abstract machine, called DCESH, which models the execution of higher-order programs running in distributed architectures. DCESH implements a native general remote higher-order function call across node boundaries. It is a modernised version of SECD enriched with specialised communication features required for implementing the remote procedure call mechanism. The key correctness result is that the termination behaviour of the remote procedure call is indistinguishable (bisimilar) to that of a local call. The correctness proofs and the requisite definitions for DCESH and other related abstract machines are formalised using Agda. We also formalise a generic transactional mechanism for transparently handling failure in DCESHs.
We use the DCESH as a target architecture for compiling a conventional call-by-value functional language ("Floskel") which can be annotated with node information. Conventional benchmarks show that the single-node performance of Floskel is comparable to that of OCaml, a semantically similar language, and that distribution overheads are not excessive.
The goal of the Provably Correct Systems project (ProCoS) is to develop a mathematical basis for the development of embedded, real-time computer systems. This survey paper introduces novel specification languages and verification techniques for four levels of development: requirements definition and design; program specifications and their transformation to parallel programs; compilation of programs to hardware; and compilation of real-time programs to conventional processors.
International journal of Programming Languages and applications is dedicated to the distribution of research results in the areas of Programming Languages and applications.
Authors are solicited to contribute to the journal by submitting articles that illustrate research results, projects, surveying works and industrial experiences that describe significant advances in the areas of Programming Languages.
Since the advent of modern programming-language compilers, in which a set of human-readable instructions is syntactically and semantically parsed and then translated and optimized into a binary format readable by a machine or an interpreter, there has been a need for the reverse process, generally known as decompilation. Yet wide gaps of knowledge remain in decompilation, even though it can be modeled as a process identical to that performed by a compiler, with the input and output merely taking on a different appearance. The Von Neumann architecture on which modern computers are still based requires that code and data reside in the same memory and operate on its contents, which allows code to modify itself; such self-modification is merely a form of compression or obfuscation of the original code. By analyzing self-modifying code and its implications, both for declarative programming languages and temporally, a model for decompilation can be described that generalizes to, and completely matches, the problem description, handling the most complicated and general situations possible.
In this paper we review the main ideas of several other papers that discuss optimization techniques used by compilers. We focus on the loop-unrolling technique and its effect on power consumption and energy usage, as well as its impact on program speed-up through instruction-level parallelism (ILP). Concentrating on superscalar processors, we discuss the idea of generalized loop unrolling presented by J.C. Huang and T. Leng, and then present a new method of traversing a linked list to obtain a better result from loop unrolling in that case. We then report the results of experiments carried out on a Pentium 4 processor (as an instance of a superscalar architecture), as well as experiments on a supercomputer (the Alliant FX/2800 system) containing superscalar node processors. These experiments show that loop unrolling has only a slight measurable effect on energy usage and power consumption, but that it can be an effective way to speed up programs.
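Unrolling a linked-list traversal can be sketched as follows. This is an illustrative example in the spirit of generalized loop unrolling, not the paper's code: the unrolled loop processes two nodes per iteration, halving the number of loop-condition tests, with an epilogue for an odd-length list.

```python
class Node:
    """Minimal singly linked list node."""
    __slots__ = ("val", "next")
    def __init__(self, val, nxt=None):
        self.val, self.next = val, nxt

def sum_list(head):
    """Baseline traversal: one node, one test per iteration."""
    total = 0
    while head is not None:
        total += head.val
        head = head.next
    return total

def sum_list_unrolled(head):
    """Unrolled by a factor of 2: two nodes per iteration, so the loop
    condition is evaluated roughly half as often."""
    total = 0
    while head is not None and head.next is not None:
        total += head.val + head.next.val
        head = head.next.next
    if head is not None:          # epilogue: leftover node of an odd list
        total += head.val
    return total
```

In Python the win is negligible, but on a superscalar processor the unrolled body exposes independent operations that can issue in parallel, which is the ILP effect the paper measures.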