Best linear programming books according to redditors

We found 27 Reddit comments discussing the best linear programming books. We ranked the 20 resulting products by number of redditors who mentioned them. Here are the top 20.


Top Reddit comments about Linear Programming:

u/nekochanwich · 44 pointsr/vegan

As of today, these books sell for:

u/kohatsootsich · 11 pointsr/math

The Gagliardo-Nirenberg inequalities you mention originate here.

Some of Nirenberg's "greatest hits" at a glance: some of his early work concerned the Minkowski problem of finding surfaces with prescribed Gauss curvature, and the related Weyl problem of finding isometric embeddings of positive-curvature metrics on the sphere. For a gentle introduction to this type of problem, accessible (in principle) after a basic course in differential geometry and some analysis, see these notes by Kazdan. For a more advanced treatment, including a discussion of the Minkowski problem and generalizations, see these notes by Guan. This line of research owes a lot to Nirenberg.

In this legendary paper (2700+ citations, for a math paper!) and another with the same co-authors (Agmon and Douglis), he investigated boundary Schauder and L^p estimates for solutions of general linear elliptic equations. You can look at Gilbarg-Trudinger, or Krylov's books (1, 2), for the basics of linear elliptic equations, including boundary estimates. Here is a course by Viaclovsky in case you don't want to buy the books. This last set covers far more basic material than Agmon-Douglis-Nirenberg, but it should give you an idea of what it's about.

Another extremely famous contribution of Nirenberg is his introduction with Kohn of the (Kohn-Nirenberg) calculus of pseudodifferential operators. Shortly thereafter, Hoermander began his monumental study of the subject, later summarized in his books I, II, III, IV. If you know nothing about pseudo-differential operators, I suggest starting with this book by Alinhac and Gérard.

Another gigantic result is the Newlander-Nirenberg theorem on integrability of almost complex structures. An almost-complex structure is a structure on the tangent space of a manifold which mimics the effect that rotation by i has on the tangent vectors. The Newlander-Nirenberg theorem tells you that if a certain simple necessary condition holds, you can actually choose local holomorphic coordinates on the manifold which induce this a.c. structure. A proof that should be reasonably accessible, provided you understand what I just wrote and have some basic notions of several complex variables, can be found here.
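For reference, the "simple necessary condition" is the vanishing of the Nijenhuis tensor (standard statement, my notation, not a quote from the thread). With $J$ the almost complex structure ($J^2 = -\mathrm{Id}$ on each tangent space),

$$N_J(X,Y) = [JX, JY] - J[JX, Y] - J[X, JY] - [X, Y],$$

and the Newlander-Nirenberg theorem says $J$ comes from local holomorphic coordinates if and only if $N_J \equiv 0$.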

Nirenberg also studied the important problem of (local) solvability of (pseudo)-differential equations with Francois Treves. In this paper, he introduced the famous condition Psi, which was only recently proved by Dencker to be necessary and sufficient for local solvability. An exposition of the problem at a basic level can be found in this undergrad thesis from UW.

Another massively influential paper was this one, with Fritz John, where he introduced the space of BMO functions and proved the John-Nirenberg lemma, to the effect that any BMO function is exponentially integrable. Fefferman later identified BMO as the dual of the Hardy space Re H_1, and the BMO class plays a crucial role in the Calderon-Zygmund theory of singular integral operators. You can read about this in any decent book on harmonic analysis. I myself like Duoandikoetxea's Fourier Analysis. BMO functions are treated in chapter 6. For a more "old school" treatment using complex analysis, including a proof of Fefferman's theorem, check out Koosis' lovely Introduction to H^p spaces.
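For the record (standard statement in my notation), $f \in BMO$ means the mean oscillation over cubes is bounded:

$$\|f\|_{BMO} = \sup_Q \frac{1}{|Q|}\int_Q |f - f_Q| \,dx < \infty, \qquad f_Q = \frac{1}{|Q|}\int_Q f\,dx,$$

and the John-Nirenberg lemma gives dimensional constants $c_1, c_2 > 0$ such that for every cube $Q$ and every $\lambda > 0$,

$$\bigl|\{x \in Q : |f(x) - f_Q| > \lambda\}\bigr| \le c_1 |Q|\, e^{-c_2 \lambda / \|f\|_{BMO}},$$

which is exactly the (local) exponential integrability mentioned above.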

Another noted contribution was his "abstract Cauchy-Kowalevski" theorem, where he formulated the classical theorem in terms of an iteration in a scale of spaces, instead of the more direct treatment based on power series. This point of view has now become classical. Look at the proof in Treves' book Basic Linear Partial Differential Equations.

Next, his landmark paper with Gidas and Ni (2000+ citations) on symmetry of positive solutions to second order nonlinear elliptic PDE is an absolute classic. The technique is now a basic part of the "elliptic toolbox".

His series of papers with Caffarelli, Spruck and Kohn (starting here) on fully nonlinear equations is also classic, and the basis for much of the later work. It's gotten sustained attention in part because optimal transport equations are of (real) Monge-Ampere type.

The theorem about partial regularity of NS you are referring to is this absolute classic with Caffarelli and Kohn. A simple recent proof, together with an accessible exposition of De Giorgi's method, can be found here.

Let me finish by mentioning my personal favorite, one of the most cited papers in analysis of the 20th century, an absolute landmark of variational analysis, Brezis-Nirenberg 1983. A pedagogical exposition appears in Chapter III of Struwe's excellent book.
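For context (my summary, not a quote from the paper), the model problem there is the critical-exponent equation

$$-\Delta u = u^{\frac{n+2}{n-2}} + \lambda u, \quad u > 0 \ \text{in } \Omega, \qquad u = 0 \ \text{on } \partial\Omega,$$

where $(n+2)/(n-2)$ is the critical exponent for the Sobolev embedding $H^1_0(\Omega) \hookrightarrow L^{2n/(n-2)}(\Omega)$. Compactness fails at the critical exponent, and Brezis and Nirenberg showed how the lower-order term $\lambda u$ restores existence of solutions in a precise range of $\lambda$.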

TLDR: Nirenberg is one of the most important analysts of the past 60 years.

edit: Thanks for the gold! Glad this was useful/interesting to someone, given how advanced and specialized the material is.

u/zifyoip · 8 pointsr/mathbooks

Linear programming:

u/clarinetist001 · 7 pointsr/statistics

Wasserman's book is definitely the standard (IMO) for a "CliffsNotes" version of mathematical statistics at the intro-Ph.D. level.

If you're looking for something like GLMs, GAMs, etc., you're going to want a book on linear models. I've heard good things about Agresti's new intro text but haven't read it. I have read Monahan's text and Christensen's text, but these are -very- technical and can't be absorbed on a quick read. I think one of the texts by Kutner might have what you're looking for, but I'm not sure (don't have a copy myself).

You may want to check out Shalizi's text.

u/mathwanker · 5 pointsr/math

These were the most enlightening for me on their subjects:

u/trngoon · 5 pointsr/statistics

Disregard this suggestion. Those books are mostly about supervised machine learning, with some unsupervised data-mining material. They are not even slightly what you should look at, and they would not be taught in a stats class. If a title says "____ learning", it is probably machine learning.

Here are some actual stats books that are very good:

https://www.wiley.com/en-us/Understanding+and+Applying+Basic+Statistical+Methods+Using+R-p-9781119061397

https://www.amazon.com/Linear-Models-Chapman-Statistical-Science/dp/1439887330

u/berf · 4 pointsr/math

I took Terry Rockafellar's special topics course based on the book in 1990 (8 years before the book appeared) and have used material in the book in several research papers.

One way to think of this book is that it is convex analysis (Rockafellar's 1970 classic) with convexity dropped. Another way to think of it is that it is nonsmooth analysis (Frank Clarke's book) updated.

I never figured out why the name "variational" (some analogy with the calculus of variations, I think), but it is a masterly treatment of optimization theory.

One of the main tools is epiconvergence, the correct notion of convergence of optimization problems.
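For the curious, the definition is short (standard form, my notation, not a quote from the book): $f_n$ epi-converges to $f$ if, for every point $x$,

$$\liminf_{n} f_n(x_n) \ge f(x) \ \text{ for every sequence } x_n \to x, \qquad \limsup_{n} f_n(x_n) \le f(x) \ \text{ for some sequence } x_n \to x.$$

Equivalently, the epigraphs converge as sets in the Painlevé-Kuratowski sense. The payoff for optimization: under epi-convergence (plus a mild compactness condition), optimal values converge and cluster points of near-minimizers of $f_n$ minimize $f$.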

The book is indeed a lot to digest, but that is because there is a lot there. I think it is one of the great math books of the twentieth century. Rockafellar (1970) is another.

It is not related to functional analysis because, like Rockafellar (1970), it stays finite-dimensional. There is a reason. Epiconvergence can be defined for nonsmooth functions on infinite-dimensional spaces, but there it doesn't have anywhere near as nice properties. It need not even be topological. But for lower semicontinuous functions on finite-dimensional spaces, the topology of epiconvergence is metrizable and compact. Other books (Attouch, 1984) do deal with the infinite-dimensional case (somewhat).

One interesting aspect of the book is its complete analysis of the generalization of subdifferentiation from convex analysis. It turns out that, in general, the notion splits into two concepts (or four, if one counts the dual notions of subgradients and subdifferentials), which merge into one in the so-called "regular" case, which includes convexity. We see the same phenomenon when we consider not functions but sets: there are two kinds of tangent vectors and two kinds of normal vectors, which again collapse into one in the "regular" case.
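To make the two concepts concrete (standard definitions in my notation, not quotes from the book), the regular (Fréchet) subgradient set and the limiting subgradient set of $f$ at $\bar x$ are

$$\hat\partial f(\bar x) = \{ v : f(x) \ge f(\bar x) + \langle v, x - \bar x\rangle + o(\|x - \bar x\|) \},$$

$$\partial f(\bar x) = \{ v : \exists\, x_k \to \bar x,\ f(x_k) \to f(\bar x),\ v_k \in \hat\partial f(x_k),\ v_k \to v \}.$$

For convex $f$ both coincide with the subdifferential of convex analysis, and the generalized Fermat rule $0 \in \hat\partial f(\bar x)$ at a local minimizer is the nonsmooth analog of "gradient equals zero."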

So what do we have?

  • notions of convergence for set-valued and nonsmooth functions and for sets that are the correct notions for optimization theory.

  • analogs of differentiability for nonsmooth functions and set-valued functions.

  • analogs of first and second order conditions for optimality for nonsmooth functions (analogs of gradient equal to zero and hessian positive definite for local minimum)

    And there's more.

    There is no prerequisite other than calculus. The whole theory is developed from the beginning. You need to know some measure theory for the last chapter. Baby Rudin (Principles of Mathematical Analysis) would be enough for that.

    Other books that cover similar topics are either more advanced or not as complete.

  • Attouch (1984), apparently out of print.

  • Clarke, (1987), apparently also out of print.

  • Aubin and Frankowska (1990)

  • Mordukhovich (2006), two volumes, the Amazon link is just to volume 1.

    But the Rockafellar and Wets book is both easier to read and more complete (except for avoiding infinite dimensions) than the others. I have to admit that I haven't really read the Mordukhovich and haven't even bought volume 2 yet.

    EDIT: Forgot the "map" question. This is the "calculus" of nonsmooth functions. But only differentiation theory, since classical integration theory handles nonsmooth functions with no problem. So it goes right next to calculus on the "map". Note that this theory did not exist until 1964, when Wijsman invented epiconvergence. So this is really new math! Some bits and pieces date earlier (Kuhn-Tucker conditions, tangent cones, Painlevé-Kuratowski set convergence, the analog of epiconvergence for sets), but the subject didn't really get rolling until the 1980s.

u/abeliangrape · 4 pointsr/programming

Seriously, what the fuck is this shit? Does the author think this is the best way to showcase traveling salesman or genetic algorithms? Because if he was trying to showcase genetic algorithms there are much better problems to choose. If he's trying to show methods to solve TSP, doing simple 2-OPT or 3-OPT after initializing a decent tour would yield better results than this. It looks like nearest neighbor might yield better results than his eventual result. If he was trying to showcase stochastic heuristics with TSP, surely simulated annealing would be a better choice. No, instead, he jumps right into code with very little justification for boneheadedly applying this particular methodology to one of the most extensively studied problems in all of CS.
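For comparison, here is a rough sketch (my own code, nothing from the article) of the nearest-neighbor-plus-2-opt combination mentioned above, on a random instance of 100 points in the unit square:

```python
import math
import random

def tour_length(points, tour):
    """Total length of the closed tour over the given points."""
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def nearest_neighbor(points, start=0):
    """Greedy construction: always visit the closest unvisited city next."""
    unvisited = set(range(len(points))) - {start}
    tour = [start]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: math.dist(points[last], points[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def two_opt(points, tour):
    """Repeatedly reverse segments as long as doing so shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 2):
            for j in range(i + 1, len(tour) - 1):
                a, b = tour[i - 1], tour[i]
                c, d = tour[j], tour[j + 1]
                # Replace edges (a,b) and (c,d) by (a,c) and (b,d) if shorter.
                if (math.dist(points[a], points[c]) + math.dist(points[b], points[d])
                        < math.dist(points[a], points[b]) + math.dist(points[c], points[d])):
                    tour[i:j + 1] = reversed(tour[i:j + 1])
                    improved = True
    return tour

if __name__ == "__main__":
    random.seed(0)
    pts = [(random.random(), random.random()) for _ in range(100)]
    t = nearest_neighbor(pts)
    print("nearest neighbor:", round(tour_length(pts, t), 3))
    t = two_opt(pts, t)
    print("after 2-opt:     ", round(tour_length(pts, t), 3))
```

This is a simple first-improvement 2-opt, not a tuned implementation, but it illustrates how little code the baseline takes.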

EDIT: I'm instead going to suggest Chapter 7 of Combinatorial Optimization by Cook, Cunningham, Pulleyblank, Schrijver for a good survey of the techniques mentioned above and much more.

u/timshoaf · 4 pointsr/math

u/G-Brain · 4 pointsr/math

The only book I know that covers the calculus of variations is Modern Geometry - Methods and Applications, Part I by Dubrovin et al. The material on the calculus of variations starts at chapter 5, and depends on the preceding chapters. Take a look, maybe you'll find it useful.

u/TheMiamiWhale · 3 pointsr/MachineLearning
  1. Not sure what exactly the context is here but usually it is the space from which the inputs are drawn. For example, if your inputs are d dimensional, the input space may be R^d or a subspace of R^d

  2. The curse of dimensionality is important because many machine learning algorithms use the idea of looking at nearby data points to infer information about a given point. With the curse of dimensionality we see that our data becomes more sparse as we increase the dimension, making it harder to find nearby data points (a quick numerical sketch of this follows after item 3).

  3. The size of the neighborhood depends on the function. A function that is growing very quickly may require a smaller, tighter neighborhood than a function that has less dramatic fluctuations.
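To make point 2 concrete, here is a quick numerical sketch (my own, assuming NumPy is available): for uniform random points in the unit cube, the average distance to the nearest neighbor grows quickly with the dimension, so "nearby" points effectively stop existing.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_nearest_neighbor_distance(n=200, d=2):
    """Average distance from each point to its nearest neighbor,
    for n uniform random points in the d-dimensional unit cube."""
    x = rng.random((n, d))
    # Pairwise distances, with the diagonal (distance to self) masked out.
    dists = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)
    return dists.min(axis=1).mean()

for d in (2, 10, 50, 100):
    print(d, round(mean_nearest_neighbor_distance(d=d), 3))
# In low dimension the nearest neighbor is very close; by d = 100 the nearest
# point is roughly as far away as a "typical" random point, which is the
# sparsity described in point 2.
```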

    If you are interested enough in machine learning that you are going to work through ESL, you may benefit from reading up on some math first. For example:

u/douggums · 2 pointsr/mathbooks

Let me dig this thread out of the grave.

This is the book I used in a Linear Operations Research class I took earlier this year. It starts with using a modeling language to optimize linear programs, then dives into the linear algebra beneath.

u/monkeyunited · 2 pointsr/datascience

Linear Models with R by Faraway, and following that Extending the Linear Model with R for generalized linear models. Faraway has an R library ("faraway") with the datasets he uses in his textbooks.

These two books are what we used at UCLA for graduate level stats.

u/drdough · 1 pointr/math

Sure, there are a few directions you could go:

Algorithms: A basic understanding of how to think about and analyze algorithms is pretty necessary if you were to go into combinatorial optimization, and it's a generally useful topic to know regardless. CLRS is the most famous introductory book on algorithms, and it gets the job done. It's long, but I thought it was decent enough. There are also plenty of video lectures on algorithms online; I liked the MIT OpenCourseWare lectures for this class.

Graph Theory: Many combinatorial optimization problems involve graphs, so you would definitely want to know some graph theory. It's also super interesting, and definitely worth learning regardless! West is a good book with lots of exercises. Bondy and Murty and Diestel also have good books, which are freely available in PDF if you do a google search. Since you're doing a project on traffic optimization, you might find network flows interesting. Networks are directed graphs, where you think about moving "flow" across the edges of the graph, so they are useful for modelling a lot of real-life problems, including traffic. Ahuja is the best book I know on network flows.
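As a toy illustration of the flow idea (my own example, assuming the networkx package is installed; it is not from any of the books above), computing the maximum traffic that can move from s to t through a small road network takes a few lines:

```python
import networkx as nx

# Directed graph: edges are road segments, 'capacity' is cars per minute.
G = nx.DiGraph()
G.add_edge("s", "a", capacity=30)
G.add_edge("s", "b", capacity=20)
G.add_edge("a", "b", capacity=10)
G.add_edge("a", "t", capacity=15)
G.add_edge("b", "t", capacity=25)

flow_value, flow_dict = nx.maximum_flow(G, "s", "t")
print(flow_value)  # 40: 15 leaves via a -> t, 25 via b -> t (5 rerouted a -> b)
print(flow_dict)   # per-edge flows
```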

Linear and Integer Programming: Many optimization problems can be described as maximizing (or minimizing) some linear function subject to a set of linear constraints. These are linear programs (LPs). If the variables need to take on integer values, then you have an integer program (IP). Most combinatorial optimization problems can be formulated as integer programs. Integer programming is NP-hard, but in practice there are methods that can solve most IPs, even very large ones, relatively quickly. So, if you actually want to optimize things in real life, this is a very useful thing to know. There's also a mathematically rich field of developing methods to solve IPs. It's a bit of a different flavor than the rest of this stuff, but it's definitely a fertile area of research. Bertsimas is good for learning linear programming. Unfortunately, I don't have a good recommendation for learning integer programming from scratch. Perhaps the chapters in Papadimitriou - Combinatorial Optimization would be a good introduction.
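To show what "maximize a linear function subject to linear constraints" looks like in practice, here is a minimal sketch (mine, assuming SciPy is installed; not tied to any of the books above):

```python
from scipy.optimize import linprog

# maximize 3x + 2y  subject to  x + y <= 4,  x + 3y <= 6,  x, y >= 0.
# linprog minimizes, so negate the objective.
c = [-3, -2]
A_ub = [[1, 1],
        [1, 3]]
b_ub = [4, 6]

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")
print(res.x, -res.fun)  # optimum at x = 4, y = 0 with objective value 12
```

If some variables had to be integers, you would switch to a MILP solver instead (for example scipy.optimize.milp in recent SciPy, or a modeling layer such as PuLP).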

Approximation Algorithms: This is about algorithms which quickly (in polynomial time) find provably good but not necessarily optimal solutions to NP-hard problems. Williamson and Shmoys have a great book that is freely available here.

The last book I'd recommend is Schrijver. This is the bible for the field. I put it here at the end because it's more of a reference book than something you could read cover to cover, but it's REALLY good.

Lastly, if you like traffic optimization, maybe look up what people are doing in operations research departments. A lot of OR is about modelling real problems with math and analyzing the models, so this would include things like traffic optimization, vehicle routing problems, designing smart electric grids, financial engineering, etc.

Edit: Not sure why my links aren't all formatting correctly... sorry!

u/Kresley · 1 pointr/suggestmeabook

OK, well, that's a pretty wide range, and completely different types of books and authors to read, but, I can take a crack at about a quarter of it.

Consilience by E.O. Wilson to start, I suppose, for interdisciplinary linkages.

The early (and by that, I mean 60s-70s) ecology textbooks were actually far more devoted to modeling systems than they are today. The neat color graphics and technology we currently use won't be there, but the underlying concepts they were obsessed with then are now being used by other disciplines.

And, you need to start reading more micro- and macro-economics basics. They don't realize it, but they use the same models (and wind up drawing completely different conclusions than the environmental scientists. Ha!). Any intro textbook in that field will do (they're basically all the same), and then the intermediate textbooks if you've done the intro courses. Get one that's one edition earlier than the current, it's cheaper and they revise them so often it doesn't matter. The basics aren't going to change; if you're looking into international economics/trade issues, though, stick with the most current.

Find Tufte's The Visual Display of Quantitative Information, at some point. It's not directly about it, but it will help enormously when you inevitably have to write some papers or proposals about it. You'll see why when you get your hands on it. It's the most cross-disciplinary book ever.

I would consider this nearly a must to work your way through: Introduction to Linear Optimization. Normally, I'd not recommend that kind of 'light reading' (/s), but since you're a math major, you should be able to pull it off. It's hard to predict where within this you're going to end up, but that one, at least, is going to be a huge tool in working through things more quickly than your peers, if you can master it. Applicable to almost every discipline.

Anything in engineering that covers "control systems", whether mechanical, electrical, environmental, etc., is going to end up talking about this, and will have to explain the models before it launches into strategies. Risk-management-specific books cover it too: not in their beginnings, which are all basic probability stuff, but the back half gets interesting in that direction. Actually, since most engineering risk-management books are written generically enough to accommodate most of the sub-disciplines of engineering, those might be the best to seek out if the ones on mechanical/electrical control systems require too much sub-discipline background. Just the other day I was looking through one about factory-line design specifics that I thought would be applicable to many more processes and fields than just what it was covering, including what business majors would call "logistics".

Tons of computer science books cover this from their own take on what that term means, and none I can remember off the top of my head. Here's hoping we'll have a more programmer-type person stop by and recommend specifics in those.

u/MadPat · 1 pointr/math

I don't have any books I would recommend on the m<n topic, but if you want to deal with non-singular matrices, you need to pick up a book on the General Linear Model. I have been told Christensen is good. I have also heard that Wichura is good, but I don't think he does matrix theory very much. He is more interested in abstract vector spaces.

u/kafkaesque_garuda · 1 pointr/optimization

Hi OP,

I found myself in a similar situation to you. To add a bit of context, I wanted to learn optimization for the sake of application to DSP/machine learning and related domains in ECE. However, I also wanted sufficient intuition and awareness to understand and appreciate optimization for its own sake. Further, I wanted to know how to numerically implement methods in real time (on embedded platforms) to solve the formulated problems (since my job involves firmware development). I am assuming from your question that you are interested in some practical implementation/simulations too.


< A SAMPLE PIPELINE >

Optimization problem formulation -> Enumerating solution methods to formulated problem -> Algorithm development (on MATLAB for instance) -> Numerical analysis and fixed-point modelling -> Software implementation -> Optimized software implementation.


So, building from my coursework during my Masters (involving the standard LinAlg, S&P, Optimization, Statistical Signal Processing, Pattern Recognition, <some> Real Analysis and Numerical Methods), I mapped out a curriculum for myself to achieve the goals I explained in paragraph 1. The Optimization/Numerical sections of it are below:


OPTIMIZATION MODELS:

  1. Optimization Models by Calafiore and El Ghaoui (Excellent and thorough reference book)
  2. Nonlinear Programming by D. Bertsekas (I agree that nonlinear programming is very relevant and will be very useful in the future too)
  3. Convex Optimization by S. Boyd and Vandenberghe (Another very good book for basics)

NUMERICAL METHODS:

  1. Numerical Linear Algebra by L. N. Trefethen and D. Bau III (Very good explanation of concepts and algorithms, and you might be able to find a free ebook version online)
  2. Numerical Optimization by Jorge Nocedal and S. Wright (Both authors are very accomplished and the textbook is well regarded as a sound introduction to this subject)
  3. Numerical Algorithms by Justin Solomon (He's a very good teacher whose presentation is digestible immediately)

  • His Lectures are here: https://www.youtube.com/playlist?list=PLHrg69yaUAPeiLEsa-1KauSe2HaA0Wf6I


    Personally I think this might be a good starting point, and as other posters have mentioned, you will need to tailor it to your use-case. Remember that learning is always iterative and you can re-discover/go deeper once you've finished a first pass. Front-loading all the knowledge at once usually is impractical.


    All the best and hope this helped!
u/darf · 1 pointr/math

Read the book 100 Digit Algorithms. It helps to have interesting problems in the context of using the numerical methods, and it also helps that the authors of the book are very good writers of mathematics. The solution to the first problem really seemed like magic to me the first time I saw it, and I had taken a class in complex analysis (sorry if this is a spoiler).

u/monghai · 1 pointr/math

This will give you some solid theory on ODEs (less so on PDEs), and a bunch of great methods of solving both ODEs and PDEs. I work a lot with differential equations and this is one of my principal reference books.

This is an amazing book, but it mostly covers ODEs sadly. Both the style and the material covered are great. It might not be exactly what you're looking for, but it's a great read nonetheless.

This covers PDEs from a very basic level. It assumes no previous knowledge of PDEs, explains some of the theory, and then goes into a bunch of elementary methods of solving the equations. It's a small book and a fairly easy read. It also has a lot of examples and exercises.

This is THE book on PDEs. It assumes quite a bit of knowledge about them though, so if you're not feeling too confident, I suggest you start with the previous link. It's something great to have around either way, just for reference.

Hope this helped, and good luck with your postgrad!