السلام عليكم
My dear friends, here are these two books:
Practical Optimization Methods: With Mathematica Applications
M. Asghar Bhatti, "Practical Optimization Methods: With Mathematica Applications"
Pages: 715 | Publisher: Springer (2000-07) | ISBN: 0387986316 | English | Djvu | 6.3 MB
This introductory textbook presents optimization theory and computational algorithms useful in practice. The approach is practical and intuitive rather than mathematically rigorous. Computationally oriented books in this area generally present algorithms alone and expect readers to perform computations by hand, while others provide programs in traditional languages such as Basic, Fortran or Pascal. The programs in this text take over the computations. This is the first book to use Mathematica to develop a thorough understanding of optimization algorithms, fully exploiting Mathematica's symbolic, numerical and graphical capabilities.
Review:
This is my favorite optimisation book. I recommend it to anyone interested in the application of optimisation techniques, in particular those in industry. This book has been a constant companion in my optimisation adventure, and unlike other books it has helped me firmly establish a solid foundation and understanding of the various optimisation techniques and the theories behind them. Believe me, I can now even read the books I shelved in the past because they were cluttered with cryptic mathematical statements. They don't scare me anymore.
Bhatti wisely used Mathematica as the teaching platform, and the accompanying OptimizationToolbox software allows one to brush aside the cryptic mathematical statements. The reader can concentrate on the concepts, relegating the mathematical manipulations to Mathematica and the functions of the OptimizationToolbox. What I like about this book is that it also shows how the Taylor series, the quadratic form and convexity requirements are put into practice to create an iterative scheme for solving a system of non-linear equations. The OptimizationToolbox and the built-in Mathematica functions seamlessly pace the reader through the mathematical preliminaries. By the end of Chapter 3, the reader should be in good shape to move on to the more serious material.
Chapter 4 deals with optimality conditions, starting with those for unconstrained optimisation problems. These conditions, albeit slightly more involved computationally, are essentially the same as the optimality conditions for single-variable functions from high-school days. The "slightly involved" computations are those of the gradient (the first-order necessary condition) and the Hessian (the second-order condition). Mathematica graphics are put to great effect to help visualize the meaning of these conditions.
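The two conditions described above can be checked numerically. Below is a minimal Python sketch (the book itself uses Mathematica; this example and its test function are my own, not from the book) that verifies the gradient is zero and the Hessian is positive definite at a candidate minimizer of f(x, y) = x² + 2y² − 2x:

```python
# Check first- and second-order optimality conditions for
# f(x, y) = x**2 + 2*y**2 - 2*x at the candidate point (1, 0).

def f(x, y):
    return x**2 + 2*y**2 - 2*x

def grad(x, y, h=1e-6):
    # Central-difference gradient.
    return ((f(x + h, y) - f(x - h, y)) / (2*h),
            (f(x, y + h) - f(x, y - h)) / (2*h))

def hessian(x, y, h=1e-4):
    # Central-difference Hessian entries of a two-variable function.
    fxx = (f(x + h, y) - 2*f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2*f(x, y) + f(x, y - h)) / h**2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4*h**2)
    return fxx, fxy, fyy

gx, gy = grad(1.0, 0.0)
fxx, fxy, fyy = hessian(1.0, 0.0)
first_order = abs(gx) < 1e-5 and abs(gy) < 1e-5      # gradient ~ 0
second_order = fxx > 0 and fxx*fyy - fxy**2 > 0      # 2x2 positive-definite test
print(first_order, second_order)
```

Both conditions hold at (1, 0), confirming a strict local minimum, just as the chapter's single-variable analogy suggests.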
The additive property of constraints, dealt with in graphic detail, extends the earlier ideas behind the optimality conditions for unconstrained optimisation to constrained optimisation problems.
The introduction to Chapter 5 gives an excellent overview of the issues in solving unconstrained problems. Basically, all solution schemes covered in this chapter involve two parts. The first is a simple iterative scheme, which requires a search direction and a step length. The second is a termination condition: the iteration stops when the gradient of the objective function, which should be zero at the optimal point, is within a specified tolerance of zero.
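The two-part scheme described above can be sketched in a few lines. This is an illustrative Python version (not the book's Mathematica code), using steepest descent with a fixed step on the sample function f(x, y) = x² + 2y²:

```python
# Iterate x_{k+1} = x_k + alpha * d_k with d_k = -grad f(x_k),
# stopping when ||grad f|| falls within a tolerance of zero.
import math

def grad(x, y):
    return (2*x, 4*y)          # gradient of f(x, y) = x**2 + 2*y**2

def descend(x, y, alpha=0.1, tol=1e-8, max_iter=10000):
    for k in range(max_iter):
        gx, gy = grad(x, y)
        if math.hypot(gx, gy) < tol:          # termination condition
            return x, y, k
        x, y = x - alpha*gx, y - alpha*gy     # step along the descent direction
    return x, y, max_iter

x, y, iters = descend(5.0, 3.0)
print(round(x, 6), round(y, 6))   # converges to the minimizer (0, 0)
```

The fixed step length here is the crudest choice; the chapter's line-search methods exist precisely to pick a better one at each iteration.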
The process of computing the step length for a particular search direction is known as line search. The line search methods covered (with Mathematica implementations) include analytical line search, equal-interval search, section search, the golden-section search, the quadratic interpolation method and approximate line search based on Armijo's rule.
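Of the methods just listed, the Armijo rule is the easiest to sketch. The following is an illustrative Python version (not the book's code): the trial step is halved until the sufficient-decrease condition f(x + t·d) ≤ f(x) + c·t·∇f(x)·d holds:

```python
# Backtracking line search based on Armijo's sufficient-decrease rule.

def armijo_step(f, grad_fx, x, d, t=1.0, c=1e-4, beta=0.5):
    fx = f(x)
    slope = sum(g*di for g, di in zip(grad_fx, d))   # directional derivative
    while f([xi + t*di for xi, di in zip(x, d)]) > fx + c*t*slope:
        t *= beta                                    # shrink the step
    return t

# Example on f(x) = x1^2 + x2^2 from x = (2, 2) along -grad f = (-4, -4):
f = lambda x: x[0]**2 + x[1]**2
t = armijo_step(f, (4.0, 4.0), [2.0, 2.0], [-4.0, -4.0])
print(t)   # t = 0.5 satisfies the Armijo condition here
```

The constants t, c and beta above are conventional illustrative defaults, not values prescribed by the book.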
As for the search direction, one obvious choice is the direction of greatest negative change, the Steepest Descent Method. The performance of this method can suffer badly, as its zigzagging search pattern slows to a crawl near the optimal point. One improvement is to retain some portion of the previous search direction, so that successive directions are not perpendicular to each other but somewhere in between. This approach of adding a portion of the previous direction is known as the Conjugate Gradient Method. The two such schemes covered and included as Mathematica functions are the Fletcher-Reeves and the Polak-Ribiere schemes. Other numerical methods covered include the Modified Newton and Quasi-Newton methods. One drawback of the Newton approach is the computation of the Hessian matrix at each iteration step. Quasi-Newton methods do not require the Hessian; instead they use inverse Hessian update formulas. Two such updates covered are the DFP (Davidon, Fletcher and Powell) update and the BFGS (Broyden, Fletcher, Goldfarb and Shanno) update. Don't be intimidated by all this jargon: Mathematica functions, including graphics functions, are provided to give step-by-step explanations and presentations of the various concepts.
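The "portion of the previous direction" idea is clearest in the linear conjugate gradient method, where the new direction is d = r + β·d_prev with β given by the Fletcher-Reeves ratio. A minimal Python sketch (my own illustration, not the book's Mathematica function), applied to the quadratic ½xᵀAx − bᵀx with A = diag(2, 4):

```python
# Linear conjugate gradient for minimizing 0.5 x^T A x - b^T x,
# equivalently solving A x = b.  The residual r = b - A x is the
# negative gradient; each new direction mixes it with the old one.

def linear_cg(A, b, x, tol=1e-12, max_iter=50):
    matvec = lambda v: [sum(a*vi for a, vi in zip(row, v)) for row in A]
    dot = lambda u, v: sum(ui*vi for ui, vi in zip(u, v))
    r = [bi - ai for bi, ai in zip(b, matvec(x))]
    d = r[:]                                       # first step: steepest descent
    for _ in range(max_iter):
        if dot(r, r) < tol:
            break
        Ad = matvec(d)
        alpha = dot(r, r) / dot(d, Ad)             # exact line-search step
        x = [xi + alpha*di for xi, di in zip(x, d)]
        r_new = [ri - alpha*adi for ri, adi in zip(r, Ad)]
        beta = dot(r_new, r_new) / dot(r, r)       # Fletcher-Reeves ratio
        d = [ri + beta*di for ri, di in zip(r_new, d)]
        r = r_new
    return x

x = linear_cg([[2.0, 0.0], [0.0, 4.0]], [2.0, 4.0], [0.0, 0.0])
print([round(v, 6) for v in x])   # the minimizer (1, 1)
```

For an n-dimensional quadratic this converges in at most n steps, which is exactly why conjugate directions beat the zigzagging of pure steepest descent. The Polak-Ribiere variant would replace the beta formula with dot(r_new, [rn - ri for rn, ri in zip(r_new, r)]) / dot(r, r).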
The section on Linear Programming is extensive compared with the other chapters. I was tempted to skim it, because the technique is well known and there are many industry-standard LP packages on the market, so why spend much time on it? However, my curiosity got the better of me, and I must confess that the combination of the accompanying OptimizationToolbox and Mathematica graphics makes revising Linear Programming entertaining and interesting. The section starts with an overview of the issues involved in solving an underdetermined system of linear equations, going over Gauss-Jordan elimination, LU decomposition and the introduction of slack variables to convert an LP problem into standard form. The simplex algorithm is introduced in three styles: the simplex tableau, the basic simplex and the revised simplex. The first two, implemented as Mathematica functions, are intended to show the sequence of steps of the simplex algorithm. For large problems, however, these LP methods may take a long time, and researchers have developed better search methods such as the interior point method, which, as its name implies, starts from an interior feasible point and takes steps along descent directions towards the optimal point.
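The standard-form machinery above can be illustrated in miniature. This hedged Python sketch (my own toy example, not the book's code) adds slack variables to the LP "maximize 3x + 2y subject to x + y ≤ 4, x + 3y ≤ 6, x, y ≥ 0" and enumerates its basic feasible solutions, the vertices the simplex method pivots between:

```python
# Standard form: variables z = [x, y, s1, s2], equalities A z = b, z >= 0.
from itertools import combinations

A = [[1, 1, 1, 0],
     [1, 3, 0, 1]]
b = [4, 6]
c = [3, 2, 0, 0]       # objective coefficients (maximize)

best, best_z = None, None
for i, j in combinations(range(4), 2):   # choose 2 basic variables
    # Solve the 2x2 system for the basic pair by Cramer's rule.
    det = A[0][i]*A[1][j] - A[0][j]*A[1][i]
    if det == 0:
        continue
    zi = (b[0]*A[1][j] - A[0][j]*b[1]) / det
    zj = (A[0][i]*b[1] - b[0]*A[1][i]) / det
    z = [0.0]*4
    z[i], z[j] = zi, zj
    if zi >= 0 and zj >= 0:              # basic *feasible* solution
        val = sum(ci*vi for ci, vi in zip(c, z))
        if best is None or val > best:
            best, best_z = val, z
print(best, best_z[:2])   # optimum 12 at (x, y) = (4, 0)
```

Brute-force enumeration like this is exponential in the number of variables; the simplex algorithm's contribution is visiting only an improving sequence of these vertices, and interior point methods avoid the vertices altogether.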
Chapters 8 and 9 cover quadratic programming and constrained nonlinear problems adequately, but they concentrate only on local optimisation techniques. Inclusion of global optimisation methods such as Simulated Annealing (SA), Genetic Algorithms (GA), Discrete Gradient Methods (DGM), and the Hooke-Jeeves, Nelder-Mead and Powell methods would have made the book a complete guide to practical optimisation.
Download from
http://mihd.net/8219dr
========================================
The Method of Moments in Electromagnetics
Chapman & Hall/CRC | 2007-11-28 | ISBN: 1420061453 | 288 pages | PDF | 4,2 MB
Responding to the need for a clear, up-to-date introduction to the field, The Method of Moments in Electromagnetics explores surface integral equations in electromagnetics and presents their numerical solution using the method of moments (MOM) technique. It provides the numerical implementation aspects at a nuts-and-bolts level while discussing integral equations and electromagnetic theory at a higher level.
The author covers a range of topics in this area, from the initial underpinnings of the MOM to its current applications. He first reviews the frequency-domain electromagnetic theory and then develops Green’s functions and integral equations of radiation and scattering. Subsequent chapters solve these integral equations for thin wires, bodies of revolution, and two- and three-dimensional problems. The final chapters examine the contemporary fast multipole method and describe some commonly used methods of numerical integration, including the trapezoidal rule, Simpson’s rule, area coordinates, and Gaussian quadrature on triangles. The text derives or summarizes the matrix elements used in every MOM problem and explains the approach used in and results of each example.
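The numerical-integration rules named above are easy to sketch. Here is an illustrative Python comparison of the composite trapezoidal and Simpson's rules (my own example, not the book's code) on ∫₀^π sin(x) dx, whose exact value is 2:

```python
import math

def trapezoid(f, a, b, n):
    # Composite trapezoidal rule on n subintervals, O(h^2) accurate.
    h = (b - a) / n
    s = 0.5*(f(a) + f(b)) + sum(f(a + i*h) for i in range(1, n))
    return h*s

def simpson(f, a, b, n):
    # Composite Simpson's rule, O(h^4) accurate; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4*sum(f(a + i*h) for i in range(1, n, 2))
    s += 2*sum(f(a + i*h) for i in range(2, n, 2))
    return h*s/3

t = trapezoid(math.sin, 0.0, math.pi, 100)
s = simpson(math.sin, 0.0, math.pi, 100)
print(t, s)   # trapezoid is off by ~1e-4, Simpson by ~1e-8
```

The same accuracy gap is why MOM codes reach for higher-order schemes such as Gaussian quadrature on triangles when integrating over surface elements.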
This book provides both the information needed to solve practical electromagnetic problems using the MOM and the knowledge necessary to understand more advanced topics in the field.
Download from
http://depositfiles.com/files/8458350
Or from
http://rapidshare.com/files/150925177/awf3.rar