LP and MIP models are widely used: many problems can be formulated as, or approximated by, linear optimization models. High-performance commercial solvers, together with dramatic advances in computer hardware and parallel computing, allow us to solve ever-larger models. Models with millions of variables and equations are no longer the exception. However, designing and building the mathematical models that underpin optimization applications remains something of an art. Good modeling skills and experience can make the difference between a successful project and a failure.
When we need to model nonlinear behavior, we can turn to nonlinear programming (NLP) solvers. Nonlinear models are used in problem areas where linear techniques are simply not sufficient, and large-scale, sparse NLP solvers help us tackle those problems.
Data is becoming available in ever-larger quantities. To make sense of it, we can use statistical techniques. Modern data analysis tools help us visualize, analyze, summarize, test hypotheses, and predict. In much of our data analysis work, the open source R environment plays an important role.
Relational databases and spreadsheets remain the most prevalent vehicles for holding and processing data. Almost all optimization projects have some sort of database or spreadsheet interface, and spreadsheets are also extremely useful for disseminating results.
Users of optimization models expect more than dry tables of results. Complex dashboards and sophisticated interactive visualizations are now standard in optimization projects.