Computers have become an important part of scientific research and technical development. They are essential both for collecting and analyzing experimental data and for solving complicated mathematical problems arising from theoretical considerations. Thanks to modern computing environments, many scientists and engineers do no programming in compiled languages like C or Fortran at all; but if they do, they should do it in an efficient way.
Of course scientific programming, like any other kind, needs interaction with the operating system, a user interface, and so on; dynamic languages like Matlab, Python, or Tcl/Tk are good for that. I want to emphasize that I am talking here about the core computational part, e.g. a library callable from a dynamic language.
It is hard to define exactly what Scientific and Technical Efficient Programming (STEP) is, but two main features are clear.
These characteristics make the standard programming and debugging techniques described in most programming books and courses useless. STEP demands scientific knowledge and proficiency in numerical analysis, computer architecture, and programming languages. All attempts to screen the programmer from the "gory details" are bound to fail, unless the problem is very simple.
In my opinion only two modern languages, namely C/C++ and FORTRAN/Fortran, are fit for STEP. (I see C++ as an improved C and Fortran as an improved FORTRAN: no Objects, Classes, and all that.) Neither is better than the other, and both have their strong and weak points. One can write a good or bad (or ugly) program in any language. From the performance point of view, nothing can beat assembler.
All function that affects performance is written in assembler and C code is just used for wrapper of assembler functions or complicated functions. Also I use many inline assembler functions, unfortunately most of commercial compiler can't handle inline assembler.
Kazushige Goto, GotoBLAS2 FAQ
C is more flexible and closer to the hardware, but some important constructions are absent, mainly multidimensional arrays (though it has a more flexible but less efficient alternative: an array of arrays). Working in industry I used C and C++ for numerical calculations and even rewrote some FORTRAN codes. However, I realized that most of my code deals with very rigid structures which simply mirror the standard FORTRAN ones. Actually this is not surprising, since most numerical algorithms were designed with FORTRAN in mind.
On the other hand, most of FORTRAN's deficiencies were corrected in Fortran. It is a modern programming language, with dynamic memory allocation, standard C interoperability, and a lot of other good things.
Every few years a new generation of programmers declares a new programming paradigm, the right way! I am not old enough to remember programming before the Fortran era, but I can recall computer scientists talking about ugly, messy Fortran and elegant Algol. For example, the strict format of Fortran code was compared to the free style of Algol: it is ugly to always start at column 7 and put only one statement per line; a programming language should be similar to a natural language in this regard, and white space should not matter. Now Python gives meaning to white space again.
This part is actually a set of commented quotes from Bjarne Stroustrup's book "The C++ Programming Language" (Fourth Edition). Note that he talks about "numerical computation", not "numerical programming", and this makes a huge difference. In many cases "numerical computation" needs no "numerical programming": standard libraries do all the hard work.
C++ wasn't specifically designed with numerical computation in mind. However, much numerical, scientific, and engineering computation is done in C++. A major reason for this is that traditional numerical work must often be combined with graphics and with computations relying on data structures that don't fit into the traditional Fortran mold (p. 31). Yes, that's right. "Traditional numerical work" is done by C or FORTRAN libraries (e.g. LAPACK). By the way, he should say FORTRAN, because Fortran is another language, roughly corresponding to FORTRAN as C++ does to C.
C++ wasn't designed primarily with numerical computation in mind. However, C++ is heavily used for numerical computation and the standard library reflects that (p. 128). No rational explanation.
The vector described in 4.4.1 was designed to be a general mechanism for holding values, to be flexible, and to fit into the architecture of containers, iterators, and algorithms. However, it does not support mathematical vector operations. Adding such operations to vector would be easy, but its generality and flexibility precludes optimizations that are often considered essential for serious numerical work. Consequently, the standard library provides a vector-like template, called valarray, that is less general and more amenable to optimization for numerical computation. The usual arithmetic operations and the most common mathematical functions are supported for valarrays. In particular, valarray offers stride access to help implement multidimensional computations (p. 131). Most modern books suggest using vector for matrix operations; they do not even mention valarray. (Which is not that great either.)
Don't try to do serious numeric computation using only the language; use libraries (p. 132). Really this advice means: don't do numerical programming in C++.
The rules for linkage of templates are the rules for linkage of the generated classes and functions (15.2, 15.2.3). This implies that if the layout of a class template or the definition of an inline function template changes, all code that uses that class or function must be recompiled. For templates defined in header files and included "everywhere" this can imply a lot of recompilation because templates tend to include a lot of information in header files, more than non-template code using .cpp files. In particular, if dynamically linked libraries are used, care has to be taken that all uses of a template are consistently defined. Sometimes, it is possible to minimize the exposure to changes in complicated template libraries by encapsulating their use in functions with non-template interfaces. For example, I might like to implement some computations using a general numerical library supporting a wide variety of types (e.g., Chapter 29, 40.4, 40.5, 40.6). However, I often know the type used for my calculations. For example, in a program I may consistently use doubles and vector<double>. In that case, I could define:

double accum(const vector<double>& v) { return accumulate(v.begin(), v.end(), 0.0); }

Given that, I can use the simple non-templated declaration of accum() in my code: double accum(const vector<double>& v); The dependence on std::accumulate has disappeared into a .cpp file that is not seen by the rest of my code. Also, I suffer the compilation-time overhead of a #include <numeric> only in that .cpp file. Note that I took the opportunity to simplify the interface to accum() compared to std::accumulate. The generality that is a key attribute of good template libraries can be seen as a source of complexity in a particular application. I suspect that I would not use this technique for standard-library templates. Those are stable over years and known to the implementations. In particular, I did not bother to try to encapsulate vector<double>. However, for more complex, esoteric, or frequently changing template libraries, such encapsulation can be useful (p. 697). This long quote explains exactly why C++ is not suitable for numerical programming: a lot of unnecessary work both from the programmer and from the compiler!
valarray<T> A numerical vector with vector operations, but with restrictions to encourage high-performance implementations; use only if you do a lot of vector arithmetic (p. 888). Poor cousin of a FORTRAN array.
Much numeric work relies on relatively simple single-dimensional vectors of floating-point values. In particular, such vectors are well supported by high-performance machine architectures, libraries relying on such vectors are in wide use, and very aggressive optimization of code using such vectors is considered essential in many fields. The valarray from <valarray> is a single-dimensional numerical array. The fundamental idea of valarray was to provide Fortran-like facilities for dense multidimensional arrays with Fortran-like opportunities for optimization. This can only be achieved with the active support of compiler and optimization suppliers and the addition of more library support on top of the very basic facilities provided by valarray. So far, that has not happened for all implementations (p. 1166). So use FORTRAN/Fortran for numerical programming!
All of these containers can be seen as providing specialized services needed by large communities of programmers. No single container could serve all of these needs because some needs are contradictory, for example, "ability to grow" vs. "guaranteed to be allocated in a fixed location", and "elements do not move when elements are added" vs. "contiguously allocated." Furthermore, a very general container would imply overhead deemed unacceptable for individual containers (p. 974). Yes. Generality kills efficiency and vice versa. The most important words here are "communities of programmers": C++ is for programmers who do not know and do not care about algorithm construction and application of results.
C++ was not designed primarily with numeric computation in mind. However, numeric computation typically occurs in the context of other work - such as database access, networking, instrument control, graphics, simulation, and financial analysis - so C++ becomes an attractive vehicle for computations that are part of a larger system (p. 1159). Especially if the numerical part is not written in C++.
Furthermore, numeric methods have come a long way from being simple loops over vectors of floating-point numbers. Where more complex data structures are needed as part of a computation, C++'s strengths become relevant (p. 1160). Plain C is better when FORTRAN is too rigid.
The net effect is that C++ is widely used for scientific, engineering, financial, and other computation involving sophisticated numerics. Consequently, facilities and techniques supporting such computation have emerged (p. 1160). Yes, and here is the problem.