Sharing Teaching Ideas: Bisymmetric Matrices: Some Elementary New Problems

1989 ◽  
Vol 82 (8) ◽  
pp. 622-623
Author(s):  
Samuel Councilman

In introductory linear algebra courses one continually seeks interesting sets of matrices that are closed under the operations of matrix addition, scalar multiplication, and if possible, matrix multiplication. Most texts mention symmetric and antisymmetric matrices and ask the reader to show that these sets are closed under matrix addition and scalar multiplication but fail to be closed under matrix multiplication. Few textbooks, if any, suggest an investigation of the set of matrices that are symmetric with respect to both diagonals, namely bisymmetric matrices. The following is a sequence of relatively straightforward problems that can be used as homework, class discussion, or even examination material in elementary linear algebra classes.
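
The closure claims can be checked numerically. The sketch below is illustrative and not from the article; the helper names are my own. A matrix is bisymmetric when it is symmetric about both the main diagonal and the anti-diagonal, i.e. A = Aᵀ and A = JAᵀJ for the exchange matrix J.

```python
import numpy as np

def is_bisymmetric(A, tol=1e-12):
    """A is bisymmetric if it is symmetric about both diagonals."""
    J = np.fliplr(np.eye(A.shape[0]))          # exchange matrix
    return (np.allclose(A, A.T, atol=tol) and
            np.allclose(A, J @ A.T @ J, atol=tol))

def random_bisymmetric(n, rng):
    """Symmetrize a random matrix about both diagonals."""
    C = rng.standard_normal((n, n))
    S = C + C.T                                 # symmetric about the main diagonal
    J = np.fliplr(np.eye(n))
    return S + J @ S @ J                        # now symmetric about both

rng = np.random.default_rng(0)
A, B = random_bisymmetric(4, rng), random_bisymmetric(4, rng)

print(is_bisymmetric(A + B))     # closed under addition
print(is_bisymmetric(3.0 * A))   # closed under scalar multiplication
print(is_bisymmetric(A @ B))     # generally NOT closed under multiplication
```
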

Author(s):  
Thitarie Rungratgasame ◽  
Pattharapham Amornpornthum ◽  
Phuwanat Boonmee ◽  
Busrun Cheko ◽  
Nattaphon Fuangfung

The definition of a regular magic square motivates us to introduce the new special magic squares, which are reflective magic squares, corner magic squares, and skew-regular magic squares. Combining the concepts of magic squares and linear algebra, we consider a magic square as a matrix and find the dimensions of the vector spaces of these magic squares under the standard addition and scalar multiplication of matrices by using the rank-nullity theorem.
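
The rank-nullity computation the authors describe can be sketched for ordinary magic squares (all rows, columns, and both diagonals share one common sum s). The encoding below, with s as an extra unknown, is my own illustration, not the paper's construction.

```python
import numpy as np

def magic_square_dimension(n):
    """Dimension of the vector space of n x n magic squares, computed
    via rank-nullity on the linear system of line-sum constraints.
    Unknowns: the n^2 entries plus the common sum s."""
    rows = []
    def eq(indices):
        v = np.zeros(n * n + 1)
        for (i, j) in indices:
            v[i * n + j] = 1.0
        v[-1] = -1.0                             # (sum of line) - s = 0
        rows.append(v)
    for i in range(n):
        eq([(i, j) for j in range(n)])           # row sums
    for j in range(n):
        eq([(i, j) for i in range(n)])           # column sums
    eq([(i, i) for i in range(n)])               # main diagonal
    eq([(i, n - 1 - i) for i in range(n)])       # anti-diagonal
    A = np.array(rows)
    # rank-nullity: dim(null space) = (#unknowns) - rank
    return A.shape[1] - np.linalg.matrix_rank(A)

print(magic_square_dimension(3))   # 3, the classical dimension for 3 x 3
print(magic_square_dimension(4))
```
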


Author(s):  
A. Myasishchev ◽  
S. Lienkov ◽  
V. Dzhulii ◽  
I. Muliar

Research goals and objectives: the purpose of the article is to study the feasibility of using graphics processors, compared with conventional multi-core processors, for solving systems of linear equations and computing matrix products. The peculiarities of using the MAGMA and CUBLAS libraries with various graphics processors are considered. A performance comparison is made between the Tesla C2075 and GeForce GTX 480 GPUs and a six-core AMD processor. Subject of research: software based on the MAGMA and CUBLAS libraries is developed to study the performance of the NVIDIA Tesla C2075 and GeForce GTX 480 GPUs on solving systems of linear equations and computing matrix products. Research methods used: libraries were used to parallelize the linear algebra computations: MAGMA and CUBLAS for GPUs, and ScaLAPACK and ATLAS for multi-core processors. To study execution speed, methods and algorithms for parallelizing computational procedures similar to those in these libraries are used. A software module has been developed for solving systems of linear equations and computing matrix products on parallel systems. Results of the research: it has been determined that, for double-precision numbers, the GeForce GTX 480 and Tesla C2075 GPUs are approximately 3.5 and 6.3 times faster, respectively, than the AMD CPU, and that for single-precision numbers the GeForce GTX 480 is 1.3 times faster than the Tesla C2075. To achieve maximum performance on an NVIDIA CUDA GPU, the MAGMA or CUBLAS libraries should be used; they accelerate the calculations by about 6.4 times compared with the traditional programming method. It has also been determined that, when solving systems of equations on a 6-core CPU with the ScaLAPACK and ATLAS libraries, a maximum speedup of 3.24 times over a single core is achievable, rather than the theoretical 6-fold speedup.
Therefore, the considered libraries cannot efficiently use processors with a large number of cores. It is demonstrated that the advantage of the GPU over the CPU grows with the number of equations.
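
The precision-dependent throughput gap described in the abstract can be measured on any machine with a short benchmark. The sketch below is an illustration, not the authors' code: NumPy's BLAS-backed product stands in for a CUBLAS GEMM call, and the figures depend entirely on the hardware.

```python
import time
import numpy as np

def matmul_gflops(n=1024, dtype=np.float64, reps=3):
    """Rough GEMM throughput: multiplying two n x n matrices costs
    about 2*n^3 floating-point operations."""
    rng = np.random.default_rng(0)
    A = rng.random((n, n)).astype(dtype)
    B = rng.random((n, n)).astype(dtype)
    best = float("inf")
    for _ in range(reps):
        t0 = time.perf_counter()
        A @ B
        best = min(best, time.perf_counter() - t0)
    return 2 * n**3 / best / 1e9

print(f"float64: {matmul_gflops(dtype=np.float64):6.1f} GFLOP/s")
print(f"float32: {matmul_gflops(dtype=np.float32):6.1f} GFLOP/s")
```
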


2019 ◽  
Vol 13 (4) ◽  
pp. 286-290
Author(s):  
Siraphob Theeracheep ◽  
Jaruloj Chongstitvatana

Matrix multiplication is an essential part of many applications, such as linear algebra, image processing and machine learning. One platform used in such applications is TensorFlow, a machine learning library whose structure is based on the dataflow programming paradigm. In this work, a method for multiplying medium-density matrices on multicore CPUs using the TensorFlow platform is proposed. This method, called tbt_matmul, utilizes the TensorFlow built-in methods tf.matmul and tf.sparse_matmul. By partitioning each input matrix into four smaller sub-matrices, called tiles, and applying an appropriate multiplication method to each pair of tiles depending on their density, the proposed method outperforms the built-in methods for matrices of medium density and matrices with a significantly uneven distribution of non-zeros.
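
The tiling idea behind tbt_matmul can be shown library-agnostically. The sketch below uses plain NumPy rather than TensorFlow, with a naive nonzero-walking product standing in for tf.sparse_matmul; the function names and the density threshold are my own assumptions, not the paper's.

```python
import numpy as np

def density(M):
    return np.count_nonzero(M) / M.size

def sparse_tile_product(At, Bt):
    """Multiply using only the nonzero entries of At
    (a crude stand-in for tf.sparse_matmul)."""
    out = np.zeros((At.shape[0], Bt.shape[1]))
    for r, c in zip(*np.nonzero(At)):
        out[r] += At[r, c] * Bt[c]
    return out

def tbt_style_matmul(A, B, threshold=0.3):
    """2x2-tiled product of square matrices of even size: a dense
    kernel for dense tile pairs, a sparse kernel otherwise."""
    n = A.shape[0] // 2
    C = np.zeros((A.shape[0], B.shape[1]))
    for i in (0, 1):
        for j in (0, 1):
            for k in (0, 1):
                At = A[i*n:(i+1)*n, k*n:(k+1)*n]
                Bt = B[k*n:(k+1)*n, j*n:(j+1)*n]
                if min(density(At), density(Bt)) < threshold:
                    C[i*n:(i+1)*n, j*n:(j+1)*n] += sparse_tile_product(At, Bt)
                else:
                    C[i*n:(i+1)*n, j*n:(j+1)*n] += At @ Bt
    return C
```
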


Author(s):  
Peter M. Higgins

Matrices represent the central algebraic vehicle for advanced computation throughout mathematics as well as the physical and social sciences. ‘Introduction to matrices’ explains that matrices are simply rectangular arrays of numbers. There are some natural, simple operations that can be performed on matrices. Scalar multiplication is where all entries in a matrix are multiplied by a fixed number. Network theory is one of the major applications of linear algebra, which is the branch of the subject that is largely represented by matrices and matrix calculations. Another application of matrices is to the geometry of transformations.
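
The two ideas named above, scalar multiplication and matrices as geometric transformations, fit in a few lines of NumPy (an illustrative sketch, not from the book):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])

# Scalar multiplication: every entry is scaled by the same number.
print(3 * A)                   # [[3 6], [9 12]]

# A matrix as a transformation of the plane: rotation by 90 degrees.
R = np.array([[0, -1],
              [1,  0]])
print(R @ np.array([1, 0]))    # the x-axis unit vector maps to [0, 1]
```
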


Acta Numerica ◽  
2014 ◽  
Vol 23 ◽  
pp. 1-155 ◽  
Author(s):  
G. Ballard ◽  
E. Carson ◽  
J. Demmel ◽  
M. Hoemmen ◽  
N. Knight ◽  
...  

The traditional metric for the efficiency of a numerical algorithm has been the number of arithmetic operations it performs. Technological trends have long been reducing the time to perform an arithmetic operation, so it is no longer the bottleneck in many algorithms; rather, communication, or moving data, is the bottleneck. This motivates us to seek algorithms that move as little data as possible, either between levels of a memory hierarchy or between parallel processors over a network. In this paper we summarize recent progress in three aspects of this problem. First we describe lower bounds on communication. Some of these generalize known lower bounds for dense classical (O(n³)) matrix multiplication to all direct methods of linear algebra, to sequential and parallel algorithms, and to dense and sparse matrices. We also present lower bounds for Strassen-like algorithms, and for iterative methods, in particular Krylov subspace methods applied to sparse matrices. Second, we compare these lower bounds to widely used versions of these algorithms, and note that these widely used algorithms usually communicate asymptotically more than is necessary. Third, we identify or invent new algorithms for most linear algebra problems that do attain these lower bounds, and demonstrate large speed-ups in theory and practice.
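
The classical-matmul communication lower bound mentioned above is attained by blocked schedules. A minimal sketch (not from the paper) of the blocked algorithm:

```python
import numpy as np

def blocked_matmul(A, B, b):
    """Classical O(n^3) matrix multiplication computed tile by tile.
    With block size b chosen so three b x b tiles fit in fast memory,
    each matrix entry is moved between memory levels O(n/b) times
    instead of O(n), which is how the Omega(n^3 / sqrt(M))
    communication lower bound is attained."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(0, n, b):
        for j in range(0, n, b):
            for k in range(0, n, b):
                C[i:i+b, j:j+b] += A[i:i+b, k:k+b] @ B[k:k+b, j:j+b]
    return C
```
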


Acta Numerica ◽  
2017 ◽  
Vol 26 ◽  
pp. 95-135 ◽  
Author(s):  
Ravindran Kannan ◽  
Santosh Vempala

This survey provides an introduction to the use of randomization in the design of fast algorithms for numerical linear algebra. These algorithms typically examine only a subset of the input to solve basic problems approximately, including matrix multiplication, regression and low-rank approximation. The survey describes the key ideas and gives complete proofs of the main results in the field. A central unifying idea is sampling the columns (or rows) of a matrix according to their squared lengths.
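
The length-squared sampling rule can be stated in a few lines. The following sketch (my notation, not the survey's) forms an unbiased estimator of A·B from k sampled outer products, choosing column j of A (and row j of B) with probability proportional to the squared length of A's column.

```python
import numpy as np

def sampled_matmul(A, B, k, rng):
    """Approximate A @ B by k sampled rank-one terms, with index j
    drawn with probability p_j proportional to ||A[:, j]||^2."""
    p = np.sum(A**2, axis=0)
    p = p / p.sum()
    idx = rng.choice(A.shape[1], size=k, p=p)
    # Rescaling each sampled outer product by 1/(k p_j) makes the
    # estimator unbiased; its variance falls off as 1/k.
    return sum(np.outer(A[:, j], B[j, :]) / (k * p[j]) for j in idx)
```
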


2017 ◽  
Author(s):  
Siddharth Samsi ◽  
Brian Helfer ◽  
Jeremy Kepner ◽  
Albert Reuther ◽  
Darrell O. Ricke

Analysis of DNA samples is an important tool in forensics, and the speed of analysis can impact investigations. Comparison of DNA sequences is based on the analysis of short tandem repeats (STRs), which are short DNA sequences of 2-5 base pairs. Current forensics approaches use 20 STR loci for analysis. The use of single nucleotide polymorphisms (SNPs) has utility for analysis of complex DNA mixtures. The use of tens of thousands of SNP loci for analysis poses significant computational challenges because the forensic analysis scales by the product of the loci count and the number of DNA samples to be analyzed. In this paper, we discuss the implementation of a DNA sequence comparison algorithm by re-casting the algorithm in terms of linear algebra primitives. By developing an overloaded matrix multiplication approach to DNA comparisons, we can leverage advances in GPU hardware and algorithms for dense matrix multiplication (DGEMM) to speed up DNA sample comparisons. We show that it is possible to compare 2048 unknown DNA samples with 20 million known samples in under 6 seconds using an NVIDIA K80 GPU.
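
The overloaded-matmul idea can be illustrated with a toy 0/1 encoding. This is a hypothetical sketch of mine; the paper's actual encoding and scoring are more involved. The point is that one dense product scores every unknown sample against every known sample at once, which is what lets the comparison run as a GPU GEMM.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical encoding: entry [s, a] is 1 if sample s carries allele a.
unknown = (rng.random((4, 10)) < 0.5).astype(np.float64)   # 4 unknown samples
known   = (rng.random((6, 10)) < 0.5).astype(np.float64)   # 6 known samples

# One dense product compares all pairs at once:
# scores[u, k] = number of alleles shared by unknown u and known k.
scores = unknown @ known.T
print(scores.shape)    # (4, 6)
```
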


2021 ◽  
Author(s):  
Domenico Brunetto ◽  
Ana Moura Santos

This work presents a set of student-centred activities that may help undergraduate students understand mathematics in their first year of a STEAM degree. In particular, the authors refer to the difficulties students meet in making connections between syntactic and semantic dimensions in learning mathematics, especially in Linear Algebra topics. The specific goal of this paper is to present and discuss how it can work in the case of linear transformations. This topic stands in the middle of every standard Linear Algebra course and is pivotal in many recent applications, such as computer graphics. The study describes the teaching-learning experience and reports the results of the first pilot study, which involves about 100 undergraduate Architecture students of Politecnico di Milano. One of the peculiarities of this work is its context, since the class is composed of a heterogeneous group of students in terms of knowledge background and attitudes towards mathematics. The main findings underline how a student-centred strategy, based on asynchronous activities and synchronous class discussion, allows misconceptions to emerge and be appropriately addressed.


Author(s):  
LIU Xian-bei

Implementing the ideological and political teaching concept in the curriculum is a key measure for establishing morality and cultivating talent in the new era, and a basic requirement of colleges' and universities' original mission of "educating talents for the Party and the country". University mathematics, as a basic subject offered universally in colleges and universities, has advantages in ideological and political teaching, but also obvious shortcomings. Linear algebra is a compulsory basic course for science and engineering majors, aimed at cultivating students' logical training and abstract thinking ability. Taking the teaching of "Linear Algebra" as an example, this article first studies the advantages of integrating ideological education into the course and the difficulties of doing so, and then puts forward specific methods for implementing ideological education in the linear algebra course: updating teaching ideas, strengthening teacher training, enriching teaching methods, and building a curriculum-wide ideological system.

