A code transaction modifies a set of code files and runs a set of tests to make sure that it does not break any of the existing features. But how do we choose the target set of tests? There could be thousands of tests. Do we run all of them? In that case, are we not wasting resources and delaying the whole process of merging a code change? Can we choose a static set of tests? In that case, we risk missing tests that are related to the code changes. Finding the optimal set of tests for a given code change is the goal we are trying to solve with the Recommendation System.
Data mining is the process of analyzing data to discover knowledge in databases. One of the techniques of data mining is classification, and neural networks have emerged as an effective classification algorithm. In this research, the backpropagation neural network algorithm is adapted for text mining to classify advisor lecturers based on students' final projects. Generally, small neural network structures are faster when deployed. The use of SVD and weight initialisation for optimizing neural network structures has been proposed in prior work: SVD is used to identify and eliminate redundant hidden nodes. Moreover, the optimal neural network size is highly dependent on the weight initialisation. The approach starts from a fairly large network and dynamically removes unimportant connections. The experiment was run 5 times for each testing scenario. The results showed that the neural network algorithm with the pruning method and a large amount of training data produces better results: the accuracy is 85%, the precision is 90.63%, the recall is 85%, and the f-measure is 87.72%. 1. Introduction 1.1 Background A final project is a student's scientific work that serves as one of the requirements for obtaining a bachelor's degree. In preparing a final project, students need a supervising lecturer to consult while completing it. The supervisor should be someone who masters a field matching the student's final project, so that the supervision process runs well. At the Department of Informatics Engineering of UMM, supervisor assignment is still done manually, relying on personal knowledge of the required lecturer expertise. Therefore, an analysis of lecturer expertise matching students' final project topics is needed. In this final-project research, the researcher applies data mining to lecturers' past supervision experience, using the topic, title, and abstract keywords of final projects as variables. By recognizing patterns in the variables that describe the final projects a lecturer has already supervised, an application can be built to assign final-project supervisors using a classification technique. Classification recognizes patterns that describe groups of objects that have already been classified and concludes
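As a rough illustration of the pruning idea only, the sketch below (Python/NumPy, synthetic data) uses the SVD of a hidden-layer activation matrix to count near-redundant hidden nodes; the matrix, the tolerance, and the layer sizes are all assumptions, not the paper's setup.

```python
import numpy as np

# Hypothetical hidden-layer activation matrix: rows = training samples,
# columns = hidden nodes. In the paper's setting this would come from a
# trained backpropagation network; here we fabricate one with
# deliberately redundant nodes.
rng = np.random.default_rng(0)
H = rng.standard_normal((200, 32))
H[:, 16:] = H[:, :16] @ rng.standard_normal((16, 16)) * 0.01  # redundant

# SVD of the activations: small singular values indicate hidden nodes
# that contribute almost no independent information and can be pruned.
s = np.linalg.svd(H, compute_uv=False)
tol = 0.01 * s[0]                       # assumed relative tolerance
effective_rank = int(np.sum(s > tol))   # hidden nodes worth keeping
print(f"nodes worth keeping: {effective_rank} of {H.shape[1]}")
```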
Nowadays, recommendation systems are used successfully to provide items (for example: movies, music, books, news, images) tailored to user preferences. Among the existing approaches to recommending adequate content, we use the collaborative filtering approach of finding the information that satisfies the user by using the reviews of other users. These reviews are stored in matrices whose sizes increase exponentially, and they are used to predict whether an item is relevant or not. Evaluation shows that such systems can provide unsatisfactory recommendations because of what we call the cold start factor. Our objective is to apply a hybrid approach to improve the quality of our recommendation system. The benefit of this approach is that it does not require a new algorithm for calculating the predictions. We apply two algorithms: k-nearest neighbours (KNN) and the matrix factorization algorithm of collaborative filtering, which is based on singular value decomposition (SVD). Our combined model has very high precision, and the experiments show that our method can achieve better results.
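A minimal sketch of such a hybrid, assuming a toy ratings matrix, a rank-2 SVD, cosine-similarity KNN, and an arbitrary blend weight; none of these choices come from the paper.

```python
import numpy as np

# Tiny user x item ratings matrix (0 = unrated), a toy stand-in for the
# review matrices described above.
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [1, 0, 5, 4]], dtype=float)
mask = R > 0
user_mean = R.sum(1) / np.maximum(mask.sum(1), 1)
R_filled = np.where(mask, R, user_mean[:, None])   # mean-fill missing cells

# SVD branch: rank-2 reconstruction of the filled matrix.
U, s, Vt = np.linalg.svd(R_filled, full_matrices=False)
k = 2
svd_pred = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# KNN branch: cosine-similarity weighted average over other users.
norms = np.linalg.norm(R_filled, axis=1, keepdims=True)
sim = (R_filled @ R_filled.T) / (norms * norms.T)
np.fill_diagonal(sim, 0)
knn_pred = sim @ R_filled / sim.sum(1, keepdims=True)

# Hybrid prediction: simple convex combination of the two estimators.
alpha = 0.5                                        # assumed blend weight
hybrid = alpha * svd_pred + (1 - alpha) * knn_pred
print(hybrid.round(2))
```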
In this guide, we introduce researchers in the behavioral sciences in general and MIS in particular to text analysis as done with latent semantic analysis (LSA). The guide contains hands-on annotated code samples in R that walk the reader through a typical process of acquiring relevant texts, creating a semantic space out of them, and then projecting words, phrases, or documents onto that semantic space to calculate their lexical similarities. R is an open source, popular programming language with extensive statistical libraries. We introduce LSA as a concept, discuss the process of preparing the data, and note its potential and limitations. We demonstrate this process through a sequence of annotated code examples: we start with a study of online reviews that extracts lexical insight about trust. That R code applies singular value decomposition (SVD). The guide next demonstrates a realistically large data analysis of Stack Exchange, a popular Q&A site for programmers. That R code applies an alternative sparse SVD method. All the code and data are available on github.com.
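The guide's own code is in R; purely to illustrate the same pipeline, here is a hedged Python sketch that builds a small semantic space with a sparse SVD and compares two documents in it. The corpus, tf-idf weighting, and k=2 are assumptions.

```python
import numpy as np
from scipy.sparse.linalg import svds
from sklearn.feature_extraction.text import TfidfVectorizer

# A toy corpus standing in for the online reviews about trust.
docs = ["the seller was honest and the delivery fast",
        "honest seller, trustworthy service",
        "terrible delivery, item arrived broken",
        "broken item and slow service"]
X = TfidfVectorizer().fit_transform(docs)    # documents x terms

# Sparse truncated SVD builds the semantic space, as in the large
# Stack Exchange example described above.
U, s, Vt = svds(X, k=2)                      # X ~ U diag(s) Vt
doc_space = U * s                            # documents in LSA space

# Lexical similarity = cosine between document vectors in the space.
def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cos(doc_space[0], doc_space[1]))       # similar reviews score high
```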
This issue (DOI: 10.13140/RG.2.2.10242.07361) includes the following articles:
P1151637518, Author="Géraud Azehoun-Pazou and Kokou Assogba", Title="Automatic Recognition of Lesion-like Regions in Black Skin Medical Images"
P1151636515, Author="Khalid A. Al-Afandy and EL-Sayed M. EL-Rabaie and Fathi E. Abd El-Samie and Osama S. Faragallah and Ahmed ELmhalaway and A. M. Shehata", Title="Robust Color Image Watermarking Technique Using DWT and Homomorphic based SVD"
P1151638519, Author="Talluri Sunil Kumar and T. V. Rajinikanth and B. Eswara Reddy", Title="Region Based Integrated Approach for Image Retrieval"
P1151616490, Author="Rafika Ben Salem and Karim Saheb Ettabaa and Mohamed Ali Hamdi", Title="Spectral-Spatial Classification of Hyperspectral Image based on Oversampling and Multi-Feature Kernels"
Based on historical earthquake data for the West Java region, the Cimandiri fault is a seismically active zone that has produced several earthquakes. The BMKG Research and Development Center (Puslitbang BMKG) investigated the Cimandiri fault using the gravity method. In this survey, the team carried out microgravity measurements with a CG-5 gravimeter at 25 stations spread around Sukabumi and Bandung. The residual Second Vertical Derivative (SVD) map, used to delineate the fault pattern clearly, also shows that earthquakes have occurred in areas with positive SVD values (0 to 35); the SVD over the study area can therefore serve as a reference for mapping earthquake-prone fault zones. Based on the SVD interpretation, the Cimandiri and Lembang faults separate in the Cipatat area. The quantitative interpretation in this study uses 3D inversion modeling of the residual anomaly on the topography. The 3D inversion result is a subsurface density distribution model indicating a fault at a depth of about 7000 m with a basin-like pattern and a density distribution of ρ ≈ 0.0533 to 1.51 g/cm³.
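For readers unfamiliar with the SVD map used here, the following hedged Python sketch computes a second vertical derivative from a gridded residual anomaly via Laplace's equation; the grid, spacing, and anomaly are invented, not the survey data.

```python
import numpy as np

# Second Vertical Derivative (SVD) of a gridded gravity anomaly, using
# Laplace's equation: d2g/dz2 = -(d2g/dx2 + d2g/dy2).
dx = dy = 500.0                                     # grid spacing (m), assumed
x, y = np.meshgrid(np.arange(0, 20000, dx), np.arange(0, 20000, dy))
g = np.exp(-((x - 1e4) ** 2 + (y - 1e4) ** 2) / 5e6)  # toy residual anomaly

# Central finite differences (np.roll wraps at edges; fine for a toy grid).
d2x = (np.roll(g, -1, 1) - 2 * g + np.roll(g, 1, 1)) / dx ** 2
d2y = (np.roll(g, -1, 0) - 2 * g + np.roll(g, 1, 0)) / dy ** 2
svd_map = -(d2x + d2y)          # positive zones flag fault-related highs
print(svd_map.max())
```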
Abstract. Despite the popularity of SVD and PCA, many applied studies share a common difficulty in understanding and differentiating these methods. Frequently, both are applied without an adequate assessment of which is more appropriate for each scenario. To facilitate the choice between these methods in computational tasks, we carry out a pragmatic discussion that correlates the success of their application with characteristics of the analysis domain. To this end, we propose a methodology that, applied to three real collections related to distinct tasks, allowed us to verify that there are differences between applying SVD and PCA and that an uninformed choice can be harmful to the task at hand.
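One concrete way to see the difference the authors discuss: PCA is an SVD of the column-centered matrix, so the two can diverge on uncentered data. A small Python sketch with toy data; no claim is made about the paper's collections.

```python
import numpy as np

# Toy data with a strong nonzero mean, where plain SVD and PCA disagree.
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 3)) + 10.0

_, _, Vt_raw = np.linalg.svd(X, full_matrices=False)   # plain SVD axes
Xc = X - X.mean(axis=0)
_, _, Vt_pca = np.linalg.svd(Xc, full_matrices=False)  # PCA axes

# The leading plain-SVD direction chases the mean offset; the PCA
# direction does not. The angle between them shows the divergence.
cosang = abs(Vt_raw[0] @ Vt_pca[0])
print(f"cosine between leading directions: {cosang:.3f}")
```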
Medical image security presents challenges and opportunities; watermarking and encryption of medical images provide the necessary control over the flow of medical information. In this paper a dual security approach is employed. A medical image is treated as a watermark and is embedded inside a natural image. This approach draws away the potential attacker by disguising the medical image as a natural image. To further enhance security, the watermarked image is encrypted using encryption algorithms. A multi-objective optimization approach is proposed, optimized using different metaheuristics such as the Genetic Algorithm (GA), Differential Evolution (DE), and Bacterial Foraging Optimization (BFOA). Such optimization helps preserve the structural integrity of the medical images, which is of utmost importance. The watermarking is implemented using both the Lifted Wavelet Transform (LWT) and Singular Value Decomposition (SVD), and the encryption is done using the RSA and AES algorithms. A Graphical User Interface (GUI) that lets the user load an image, watermark it, encrypt it, and retrieve the original image whenever necessary is also designed and presented in this paper.
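To make the embedding rule concrete, here is a hedged Python sketch of the transform-plus-SVD step alone, using PyWavelets' plain DWT as a stand-in for the lifted transform and omitting the GA/DE/BFOA optimization and the RSA/AES stage; the images and the strength alpha are synthetic assumptions.

```python
import numpy as np
import pywt

# Toy cover ("natural") and watermark ("medical") images.
rng = np.random.default_rng(2)
cover = rng.random((256, 256))
mark = rng.random((128, 128))

LL, (LH, HL, HH) = pywt.dwt2(cover, "haar")      # LL is 128 x 128
Uc, sc, Vct = np.linalg.svd(LL)
sm = np.linalg.svd(mark, compute_uv=False)

alpha = 0.05                                      # embedding strength, assumed
LL_marked = Uc @ np.diag(sc + alpha * sm) @ Vct   # modify singular values
watermarked = pywt.idwt2((LL_marked, (LH, HL, HH)), "haar")
print(watermarked.shape)                          # back to 256 x 256
```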
The paper proposes a novel robust watermarking technique based on the newly introduced nonsubsampled contourlet transform (NSCT) and singular value decomposition (SVD) for multimedia copyright protection. The NSCT gives an asymptotically optimal representation of the edges and contours in an image by virtue of its good multiresolution, shift invariance, and multidirectionality. After decomposing the host image into sub-bands, we choose the low-frequency directional sub-band and apply singular value decomposition. The singular values of the original image are then modified by the singular values of the NSCT-transformed grayscale visual logo watermark image. This hybrid approach improves the performance of the watermarking technique compared to earlier techniques. Experimental results show that the hybrid technique is resilient to various linear and nonlinear filtering, JPEG compression, JPEG2000 compression, histogram equalization, grayscale inversion, contrast adjustment, gamma correction, alpha-mean filtering, cropping, Gaussian noise, scaling, etc.
In order to contribute to the security of sharing and transferring medical images, we present a multiple watermarking technique for multiple protections based on the combination of three transforms: the discrete wavelet transform (DWT), the fast Walsh-Hadamard transform (FWHT), and the singular value decomposition (SVD). In this paper, three watermark images of size 512×512 were inserted into a single medical image of various modalities such as magnetic resonance imaging (MRI), computed tomography (CT), and X-ray. After applying the DWT up to the third level on the original image, the high-resolution sub-bands were selected, and the FWHT and then the SVD were applied to them. The singular values of the three watermark images were inserted into the singular values of the cover medical image. The experimental results showed the effectiveness of the proposed method in terms of quality and robustness compared to other techniques reported in the literature.
Cloud computing is a well-known subject in IT systems and is based on network and computer utilities. Cloud computing has been the main source of storage facilities for small, medium, and large companies in recent years. Since many customers look for cloud computing facilities and services, the security of information must be guaranteed to give full confidence to customers who trust these facilities and are ready to store their valuable information on cloud computing and distributed networks. The fundamental problem in cloud computing is the security of the stored data [1].

This paper describes the process of storing images and scanned documents on the cloud using our proposed security model, which is based on biometric features and multilevel encryption. It also discusses the cloud computing environment, issues and concerns regarding security, authentication using biometric features, and new security algorithms and models. The security algorithms used in this paper consist of a scrambling algorithm and two-level encryption methods. In addition, we propose two different scenarios that improve the DEPSKY model.

Biometrics is the science of establishing the identity of an individual from the inherent physical or behavioral characteristics associated with that person. The relationship between the perceived authentication content and the perceived content of biometric features is studied. The process of developing the algorithms and the model is documented in the proposed-model section.
A pirated copy of a digital video can easily be disseminated to a global audience because of rapid high-speed internet. Because digital video is perfectly replicable, numerous unlawful duplicates of the original video can be made. A video can undergo several intentional attacks, such as frame dropping, averaging, cropping, and median filtering, and unintentional attacks, such as the addition of noise and compression, which can compromise copyright information, thereby defeating authentication. Hence, techniques are needed to secure the copyrights of the proprietor and counteract illegal copying. One such technique is video watermarking, a strategy for concealing some information in digital video sequences, which are ordered sequences of successive still frames. In this paper, we study the properties of video watermarking, the arrangement of digital video watermarking systems, watermark attacks, applications, and the issues and challenges of video watermarking. Finally, we propose some future research directions.
This paper focuses on no-reference image quality assessment (NR-IQA) metrics. In the literature, a wide range of algorithms have been proposed to automatically estimate the perceived quality of visual data. However, most of them are not able to effectively quantify the various degradations and artifacts that an image may undergo. Thus, merging diverse metrics operating in different information domains is expected to yield better performance, which is the main theme of the proposed work. In particular, the metric proposed in this paper is based on three well-known NR-IQA objective metrics that depend on natural scene statistical attributes from three different domains to extract a vector of image features. Then, a Singular Value Decomposition (SVD) based dominant-eigenvectors method is used to select the most relevant image quality attributes. The latter are used as input to a Relevance Vector Machine (RVM) to derive the overall quality index. Validation experiments are divided into two groups: in the first group, the learning process (training and test phases) is applied to one single image quality database, whereas in the second group of simulations, the training and test phases are performed on two distinct datasets. The obtained results demonstrate that the proposed metric performs very well in terms of correlation, monotonicity, and accuracy in both scenarios.
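As a hedged illustration of the selection step only (the RVM stage is not reproduced), the Python sketch below ranks synthetic image-quality features by their energy on the dominant right singular vectors; the feature matrix and k are assumptions.

```python
import numpy as np

# Synthetic feature matrix: 500 images x 12 natural-scene-statistics
# features, with one pair made strongly dependent for illustration.
rng = np.random.default_rng(3)
F = rng.standard_normal((500, 12))
F[:, 0] = 3 * F[:, 1] + 0.1 * F[:, 0]

# SVD of the centered feature matrix; the dominant right singular
# vectors describe the directions carrying most feature variance.
U, s, Vt = np.linalg.svd(F - F.mean(0), full_matrices=False)
k = 3                                    # dominant directions, assumed
scores = (Vt[:k].T ** 2 * s[:k] ** 2).sum(axis=1)  # energy per feature
top = np.argsort(scores)[::-1][:6]       # most relevant attributes
print(top)
```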
In this paper we propose a semi-blind watermarking scheme using the Discrete Wavelet Transform and Singular Value Decomposition for copyright protection. We use a grayscale image as a watermark and hide it in another grayscale image serving as the cover. The cover image is reordered (zig-zag scanned) and divided into a number of blocks of size n × n. We find the spatial frequency of each block and apply a threshold on this spatial frequency to form a reference image. The reference image is then transformed into the wavelet domain. We hide the watermark in the reference image by modifying the singular values of the reference image with the singular values of the watermark. The proposed algorithm provides good imperceptibility and is robust against various attacks.
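A small Python sketch of the reference-image step, under an assumed block size and a median threshold (the zig-zag reordering and the wavelet/SVD stages are omitted); spatial frequency is taken in its classic row/column-difference form.

```python
import numpy as np

# Spatial frequency of a block: combine row and column first differences.
def spatial_frequency(block):
    rf = np.sqrt(np.mean(np.diff(block, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(block, axis=0) ** 2))  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)

rng = np.random.default_rng(4)
cover = rng.random((256, 256))          # toy grayscale cover image
n = 16                                  # block size, assumed
blocks = cover.reshape(256 // n, n, 256 // n, n).swapaxes(1, 2)
sf = np.array([[spatial_frequency(b) for b in row] for row in blocks])
reference_mask = sf >= np.median(sf)    # blocks kept for the reference image
print(int(reference_mask.sum()), "of", sf.size, "blocks selected")
```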
Secret key extraction is a crucial issue in physical layer security and a promising technique for the next generation of 5G and beyond. Unlike previous works on this topic, in which Orthogonal Frequency Division Multiplexing (OFDM) sub-channels are considered to be independent, the effect of correlation among sub-channels on the secret key rate is addressed in this paper. We assume a realistic model of dependency among the sub-channels. Moreover, a new approach with low computational complexity for key extraction and optimal Mutual Information (MI) is presented in our study. To do this, we utilize Singular Value Decomposition (SVD) based precoding to obtain an optimal SVD-based approach for key extraction. Simulation results show that the rate of the key exchange link may drop by up to 72% when the signal-to-noise ratio (SNR) is 8 dB. The low computational complexity of our proposed approach makes it a promising candidate for developing secure and high-speed networks.
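For context, the canonical SVD precoding step that such schemes build on can be sketched in a few lines of Python: decomposing the channel turns it into independent singular-value gains. The channel here is synthetic, and this is only the textbook construction, not the paper's key-extraction protocol.

```python
import numpy as np

# Synthetic 4x4 complex channel matrix H.
rng = np.random.default_rng(5)
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)

U, s, Vh = np.linalg.svd(H)
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)  # transmit symbols
y = U.conj().T @ (H @ (Vh.conj().T @ x))                  # precode + shape

# After precoding, the effective channel is diagonal: y == diag(s) @ x,
# so both ends observe the same singular-value gains to quantize keys.
print(np.allclose(y, s * x))
```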
A combined texture feature extraction approach for texture image retrieval is proposed in this paper. Two kinds of low-level texture features are combined in the approach: one is extracted from singular value decomposition (SVD) of dual-tree complex wavelet transform (DTCWT) coefficients, and the other is extracted from multi-scale local binary patterns (LBPs). The fused feature of SVD-based multi-directional wavelet features and multi-scale LBP features has a short feature vector. Comparative experiments are conducted on the Brodatz and Vistex datasets. According to the experimental results, the proposed method performs relatively better than existing methods in terms of retrieval accuracy and time complexity.
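As an illustration of the LBP half of the fusion feature (the DTCWT+SVD half would be concatenated alongside it), here is a hedged Python sketch using scikit-image; the radii and the toy texture are assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern

# Toy stand-in for a Brodatz texture patch.
rng = np.random.default_rng(6)
texture = rng.random((128, 128))

# Uniform LBP histograms at several radii, concatenated into one short
# feature vector (uniform LBP with P points yields P + 2 codes).
feats = []
for radius in (1, 2, 3):                # assumed scales
    p = 8 * radius                      # sampling points per circle
    codes = local_binary_pattern(texture, p, radius, method="uniform")
    hist, _ = np.histogram(codes, bins=p + 2, range=(0, p + 2), density=True)
    feats.append(hist)
feature_vector = np.concatenate(feats)  # length 10 + 18 + 26 = 54
print(feature_vector.shape)
```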
Despite the popularity of matrix factorization (MF) methods in Recommender Systems (RSs), the practical distinction among them in real scenarios is not clear. In this context, this work organizes existing methods, proposing a taxonomic organization for the area, and characterizes their performance, correlating application success with domain characteristics. Aiming at the practical use of this research, we present a characterization framework for MF methods applied to RSs, useful for contrasting the performance of different methods. Analyses of real collections have shown that recommendations generated via SVD tend to focus on non-popular items, while recommendations issued by PCA focus on popular items.
This paper presents secure transmission for digital watermarking using a four-level stationary (redundant) wavelet transform (SWT/RWT) and singular value decomposition (SVD) in the embedding process. For copyright protection and image quality, we use 4-SWT-SVD for digital watermarking. In our method, the SWT is applied to decompose the cover image and produce the low- and high-frequency coefficients, taking advantage of its "power compaction" property. SVD is additionally performed to acquire the singular values and improve the strength of the algorithm. The experimental outcomes show improvements of up to 49% compared to previous results. Several results show that the presented method is able to withstand various attacks. In this study, we evaluated the performance of the algorithm on the basis of the Peak Signal-to-Noise Ratio (PSNR) and Normalized Correlation (NC) values of various watermarked images.
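The two figures of merit named above are standard; a small Python sketch of both for 8-bit images follows (the toy images exist only for the usage demo).

```python
import numpy as np

# PSNR measures imperceptibility of the watermarked image.
def psnr(original, marked, peak=255.0):
    mse = np.mean((original.astype(float) - marked.astype(float)) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

# Normalized correlation (NC) measures how well the extracted
# watermark matches the embedded one after an attack.
def nc(w, w_extracted):
    a = w.ravel().astype(float)
    b = w_extracted.ravel().astype(float)
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

# Usage demo on toy data.
rng = np.random.default_rng(7)
img = rng.integers(0, 256, (64, 64))
noisy = np.clip(img + rng.normal(0, 2, img.shape), 0, 255)
print(psnr(img, noisy), nc(img, noisy))
```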
This paper examines the implementation of the Singular Value Decomposition (SVD) method to detect the presence of a wireless signal. The method is used to find the maximum and minimum eigenvalues. We simulated the algorithm using common digital signals in wireless communication, namely rectangular pulse shaping, raised cosine, and root-raised cosine, to test the performance of the signal detector. The SVD-based signal detector was found to be more efficient at sensing a signal without knowing the properties of the transmitted signal. Its execution time is acceptable compared to the widely favored energy detection, and its computational complexity is medium compared to the energy detector. The algorithm is suitable for blind spectrum sensing, where the properties of the signal to be detected are unknown. This is also an advantage of the algorithm, since any signal would interfere and subsequently affect the quality of service (QoS) of an IEEE 802.22 connection. Furthermore, the algorithm performed better in low signal-to-noise ratio (SNR) environments.
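A hedged Python sketch of this kind of detector: stack received samples into a matrix and test the ratio of the largest to smallest singular value against a threshold. The signal model, smoothing factor, and threshold are assumptions, not the paper's settings.

```python
import numpy as np

# Blind ratio test: with noise only the singular values are roughly
# equal; a present signal inflates the largest one.
def detect(samples, rows=10, threshold=2.5):
    n = (len(samples) // rows) * rows
    Y = samples[:n].reshape(rows, -1)      # smoothing factor x snapshots
    s = np.linalg.svd(Y, compute_uv=False)
    return s[0] / s[-1] > threshold        # True means "signal present"

rng = np.random.default_rng(8)
noise = rng.standard_normal(1000)
tone = np.cos(2 * np.pi * 0.05 * np.arange(1000))  # toy "wireless" signal
print(detect(noise), detect(noise + 3 * tone))     # expect False, True
```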
This dissertation deals with a class of nonlinear adjustment problems that has a direct least squares solution for certain weighting cases. In the literature of mathematical statistics these problems are expressed in a nonlinear model called Errors-In-Variables (EIV) and their solution became popular as total least squares (TLS). The TLS solution is direct and involves the use of singular value decomposition (SVD), presented in most cases for adjustment problems with equally weighted and uncorrelated measurements. Additionally, several weighted total least squares (WTLS) algorithms have been published in the last years for deriving iterative solutions, when more general weighting cases have to be taken into account and without linearizing the problem in any step of the solution process. This research provides firstly a well defined mathematical relationship between TLS and direct least squares solutions. As a by-product, a systematic approach for the direct solution of these adjustments is established, using a consistent and complete mathematical formalization. By transforming the problem to the solution of a quadratic or cubic algebraic equation, which is identical with those resulting from TLS, it will be shown that TLS is an algorithmic approach already known to the geodetic community and not a new method. A second contribution of this work is the clear overview of weighted least squares solutions for the discussed class of problems, i.e. the WTLS solution in the terminology of the statistical community. It will be shown that for certain weighting cases a direct solution still exists, for which two new solution strategies will be proposed. Further, stochastic models with more general weight matrices are examined, including correlations between the measurements or even singular cofactor matrices. New algorithms are developed and presented that provide iterative weighted least squares solutions without linearizing the original nonlinear problem. The aim of this work is the popularization of the TLS approach, by presenting a complete framework for obtaining a (weighted) least squares solution for the investigated class of nonlinear adjustment problems. The proposed approaches and the implemented algorithms can be employed for obtaining direct solutions in engineering tasks for which efficiency is important, while iterative solutions can be derived for stochastic models with more general weights.
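For reference, the direct TLS solution for the equally weighted, uncorrelated case mentioned above is the classical SVD construction; here is a compact Python sketch on synthetic errors-in-variables data (the data and noise levels are assumptions).

```python
import numpy as np

# Classical direct TLS via SVD: augment [A | b], take the right
# singular vector of the smallest singular value, and rescale.
def tls(A, b):
    n = A.shape[1]
    C = np.column_stack([A, b])
    _, _, Vt = np.linalg.svd(C)
    v = Vt[-1]                     # direction of smallest singular value
    return -v[:n] / v[n]           # errors-in-variables estimate

# Synthetic EIV data: both the design matrix and the response are noisy.
rng = np.random.default_rng(9)
x_true = np.array([2.0, -1.0])
A_true = rng.standard_normal((50, 2))
A = A_true + 0.05 * rng.standard_normal(A_true.shape)
b = A_true @ x_true + 0.05 * rng.standard_normal(50)
print(tls(A, b))                   # close to [2, -1]
```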
The paper presents a two-dimensional land seismic processing workflow involving two flowcharts, ranging from pre-processing to the post-stack migration stage. The flowcharts differ mainly in the filtering stage: the first uses a frequency filter (f-k filter) and the second a spatial coherence filter (SVD filter).
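A minimal Python sketch of an SVD (eigenimage) coherence filter of the kind named in the second flowchart: keep the first few eigenimages of a flattened gather, which pass laterally coherent energy and reject incoherent noise. The gather and the number of eigenimages kept are assumptions.

```python
import numpy as np

# Toy flattened gather: 48 identical traces of one reflection plus noise.
rng = np.random.default_rng(10)
t = np.arange(500)
wavelet = np.exp(-((t - 250) ** 2) / 200.0)
gather = np.tile(wavelet, (48, 1)) + 0.3 * rng.standard_normal((48, 500))

# Eigenimage filter: rank-k SVD reconstruction keeps coherent events.
U, s, Vt = np.linalg.svd(gather, full_matrices=False)
k = 3                                   # eigenimages kept, assumed
filtered = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]
print(np.linalg.norm(gather - filtered) / np.linalg.norm(gather))
```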
This paper highlights an algorithm for detecting the presence of a wireless signal using the Singular Value Decomposition (SVD) technique. We simulated the algorithm on common digital signals in wireless communication to test the performance of the signal detector. The SVD-based signal detector was found to be more efficient at detecting a signal without knowing the properties of the transmitted signal, and its performance is better than that of the widely favored energy detection. The algorithm is suitable for blind spectrum sensing, where the properties of the signal to be detected are unknown. This is also an advantage of the algorithm, since any signal would interfere and subsequently affect the quality of service (QoS) of an IEEE 802.22 connection.
The ability of biogeochemical ecosystem models to represent agro-ecosystems depends on their correct integration with field observations. We report simultaneous calibration of 67 DayCent model parameters using multiple observation types through inverse modeling using the PEST parameter estimation software. Parameter estimation reduced the total sum of weighted squared residuals by 56% and improved model fit to crop productivity, soil carbon, volumetric soil water content, soil temperature, N2O, and soil NO3- compared to the default simulation. Inverse modeling substantially reduced predictive model error relative to the default model for all model predictions, except for soil NO3- and NH4+. Post-processing analyses provided insights into parameter–observation relationships based on parameter correlations, sensitivity and identifiability. Inverse modeling tools are shown to be a powerful way to systematize and accelerate the process of biogeochemical model interrogation, improving ou...
The purpose of this article is to determine the usefulness of Graphics Processing Unit (GPU) calculations for implementing the Latent Semantic Indexing (LSI) reduction of the TERM-BY-DOCUMENT matrix. The considered reduction of the matrix is based on the SVD (Singular Value Decomposition). The high computational complexity of the SVD, O(n³), makes the reduction of a large indexing structure a difficult task. This article compares the time complexity and accuracy of the algorithms implemented in two different environments. The first environment is associated with the CPU and MATLAB R2011a; the second is related to graphics processors and the CULA library. The calculations were carried out on generally available benchmark matrices, which were combined to achieve a resulting matrix of large size. For both environments, computations were performed for double- and single-precision data.
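A CPU-only Python sketch of the benchmarked operation, a truncated SVD of a sparse TERM-BY-DOCUMENT matrix; the matrix size, density, and rank k are assumptions, and the CULA/GPU path is not reproduced.

```python
import time
import numpy as np
from scipy.sparse import random as sprandom
from scipy.sparse.linalg import svds

# Synthetic sparse term-by-document matrix standing in for the combined
# benchmark matrices described above.
A = sprandom(5000, 2000, density=0.01, format="csc", random_state=0)

t0 = time.perf_counter()
k = 100
U, s, Vt = svds(A, k=k)                 # rank-k LSI space
elapsed = time.perf_counter() - t0

# Relative reconstruction error of the rank-k approximation.
err = np.linalg.norm(A.toarray() - (U * s) @ Vt) / np.linalg.norm(A.toarray())
print(f"k={k}: {elapsed:.2f}s, relative error {err:.3f}")
```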