ORBEL 24

Detailed schedule

Thursday, January 28, 2010

8:30-9:00    Registration - Welcome coffee
9:00-9:30    Welcoming session - Room 030
9:30-10:30   Plenary session: P. Baptiste - "Sustainable Development: How can we help" - Room 030
10:30-11:00  Coffee break
11:00-12:40  Parallel sessions
  • Timetabling in education and sport - Chair: G. Vanden Berghe - Room 126
  • Transportation management - Chair: F. Semet - Room 130
  • Networks - Chair: B. Fortz - Room 138
  • Nonconvex optimization 1 - Chair: F. Bach - Room 035
12:40-14:00  Lunch (and board meeting)
14:00-15:40  Parallel sessions
  • Constraint programming models 1 - Chair: Y. Deville - Room 126
  • Vehicle routing - Chair: S. Limbourg - Room 130
  • Combinatorial optimization and IP applications - Chair: Q. Louveaux - Room 138
  • Nonconvex Optimization 2 - Chair: R. Sepulchre - Room 035
15:40-16:10  Coffee break
16:10-17:50  Parallel sessions
  • Constraint programming models 2 - Chair: P. Schaus - Room 126
  • Performance modeling - Chair: G. Janssens - Room 130
  • Scheduling - Chair: K. Sorensen - Room 138
  • Planning under uncertainty - Chair: R. Leus - Room 035
17:50-       General Assembly - Room 138
18:45-       Conference dinner

Friday, January 29, 2010

9:00-10:40   Parallel sessions
  • Metaheuristics - Chair: J. Teghem - Room 126
  • Production and distribution (starts 9:25) - Chair: Y. Arda - Room 130
  • Multiple criteria - Chair: R. Bisdorff - Room 138
  • Stochastic models (starts 9:25) - Chair: L. Esch - Room 035
10:40-11:00  Coffee break
11:00-12:40  Parallel sessions
  • Constraint programming and Supply Chain Management - Chair: Y. Deville - Room 126
  • OR in health management - Chair: P. De Causmaecker - Room 130
  • Rankings and importance indices - Chair: JL. Marichal - Room 138
  • Queueing - Chair: S. Wittevrongel - Room 035
12:40-14:00  Lunch
14:00-15:00  Plenary session: M. Goemans - "The Power of Matroids" - Room 030
15:10-16:00  Parallel sessions
  • Optimization software - Chair: E. Loute - Room 126
  • Integrated operations planning - Chair: B. Raa - Room 130
  • Cycles in graphs - Chair: F. Spieksma - Room 138
16:00-16:35  Plenary session: ORBEL award and closing session - Room 030
16:35-...    Coffee break

Abstracts

Thursday 11:00 - Timetabling in education and sport
Room 126 - Chair: G. Vanden Berghe
  • Educational timetabling, an overview of a mature research domain
    Peter Demeester (KaHo Sint-Lieven)
    Co-authors: G. Vanden Berghe, P. De Causmaecker
    Abstract:
    According to Anthony Wren, timetabling is `the allocation, subject to constraints, of given resources to objects being placed in space-time, in such a way as to satisfy as nearly as possible a set of desirable objectives'. In most cases the resources are either scarce or expensive (or both) and can only be assigned once per time unit. Within educational timetabling in particular, two large domains can be distinguished: course and examination timetabling. Course timetabling is the assignment of students to rooms and time slots. Course timetabling can in turn be divided into two categories. In the first one, students take curricula: individual students are not assigned to room-time slot combinations, the curricula are. Curricula should be scheduled in such a way that no student has to attend more than one lecture at the same time. Particularly on the European continent, curriculum-based course timetabling is standard practice. In the Anglo-American educational system, however, post-enrolment-based course timetabling is more dominant: students select several courses from a list, which leads to rather individual timetables. In this case, each student should be able to attend the lectures of each selected course. Apart from course timetabling, we also consider examination timetabling. Although not very common, the same distinction as in course timetabling can be made: post-enrolment and curriculum-based examination timetabling. In both cases, exams need to be assigned to room-time slot combinations. Examination and course timetabling show significant differences. In course timetabling, a student prefers a tight schedule with, in the ideal case, no gaps between the lectures; in examination timetabling, on the other hand, students appreciate some study time between two consecutive exams. Both problem categories require several days or weeks for generating a solution. In our presentation we give a literature overview on university timetabling and focus in particular on the multitude of problem definitions and the different approaches that have been taken to reach a solution.
  • Educational course timetabling: a case study
    Herman Crauwels (campus De Nayer, Hogeschool voor Wetenschap & Kunst)
    Abstract:
    A course timetabling problem in a university environment is illustrated by summing up the different hard and soft constraints. To construct a feasible timetable, a semi-manual approach is described.
  • A hyper-heuristics approach to solve a real-world and a benchmark examination timetabling problem
    Peter Demeester (KaHo Sint-Lieven)
    Abstract:
    In this abstract we tackle two examination timetabling problems, which we both solve with the same hyper-heuristics approach. In contrast to meta-heuristics, in which the search is executed on the space of solutions, hyper-heuristics operate on a search space of heuristics. The idea behind hyper-heuristics is that the selection of low-level heuristics is automated, for example by applying machine learning techniques. In general, low-level heuristics can be built so that each of them can individually solve one specific part of the problem. Hyper-heuristics can exploit the particular properties of each low-level heuristic by combining them in a specific order. A typical hyper-heuristics framework consists of a heuristic selection mechanism and a set of low-level heuristics. These low-level heuristics can be either perturbative (changing only small parts of the solution) or constructive (building a solution from scratch). In our case, the heuristic selection mechanism is `simple random', meaning that a low-level heuristic from a list of perturbative heuristics is randomly selected. As acceptance criteria we experiment with four meta-heuristics: simulated annealing, great deluge, steepest descent, and the late acceptance strategy. The hyper-heuristics framework is first applied to a real-world examination timetabling problem at the School of Engineering of KaHo Sint-Lieven. In contrast to the solution obtained by the manual planner, we could reduce the number of weekly time slots from twelve to ten while satisfying all hard and soft constraints. In order to compare the hyper-heuristic's performance with the state of the art, we also applied it to the data sets of the examination timetabling track of the 2007 International Timetabling Competition (ITC 2007). The goal of the examination track of ITC 2007 was to provide more realistic data sets that could be used as test beds for algorithms. The hard and soft constraints of the ITC 2007 examination timetabling track are quite similar to those of KaHo Sint-Lieven, except that at KaHo there is a clear distinction between oral and written exams. The ITC 2007 problem definition, on the other hand, incorporates some constraints that are not present in the KaHo Sint-Lieven problem: for example, it demands that large exams be scheduled at the beginning of the examination period, and that some of the time slots and rooms preferably be avoided. Because of these extra constraints, we introduced additional low-level heuristics, particularly constructed to tackle them. With this general approach, which was originally developed for tackling a real-world problem, we obtain results that are competitive with those obtained during the competition.
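    As an illustration of the framework, here is a minimal sketch of ours (not the authors' implementation) of simple-random selection combined with the late acceptance criterion; the heuristic list, cost function and solution interfaces are hypothetical placeholders:

        import random

        def hyper_heuristic(initial, heuristics, cost, iters=100000, L=50):
            # Simple-random hyper-heuristic with late acceptance (sketch).
            current, best = initial, initial
            history = [cost(initial)] * L      # costs recorded L iterations back
            for k in range(iters):
                h = random.choice(heuristics)  # simple-random heuristic selection
                candidate = h(current)         # apply a perturbative low-level move
                c = cost(candidate)
                # Late acceptance: accept if no worse than the cost recorded
                # L iterations ago, or than the current solution itself.
                if c <= history[k % L] or c <= cost(current):
                    current = candidate
                history[k % L] = cost(current)
                if cost(current) < cost(best):
                    best = current
            return best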
  • The carry-over effect does not exist in football
    Dries Goossens (Katholieke Universiteit Leuven)
    Co-authors: F.C.R. Spieksma
    Abstract:
    In a round robin tournament, it is often believed that each team has an effect on its opponent which carries over to the next game. Indeed, if team C plays against team A, and subsequently against team B, C's performance against B can be affected by A, and we say that team B receives a carry-over effect from A. For instance, if team A is a very strong team, then team C could be exhausted and discouraged after this game, which could benefit its next opponent, team B. Clearly, any schedule will lead to carry-over effects. In this work, we measure whether carry-over effects have an influence on the outcome of football matches. We do this for the highest division in Belgian football, and find that there is no evidence to support the claim that carry-over effects affect the results.

Thursday 11:00 - Transportation management
Room 130 - Chair: F. Semet
  • Incorporating logistics decisions in activity-based freight modeling
    Tabitha Maes (Hasselt University)
    Co-authors: Katrien Ramaekers, An Caris, Tom Bellemans, Gerrit Janssens
    Abstract:
    Recent trends in transport modeling have moved away from the traditional four-step models towards activity-based models. One important aspect of an activity-based freight model is to take logistics choices, such as the shipment size, into consideration. In the past, most transport models focused on modeling passenger flows, using discrete choice models to decide on the modal split. For freight modeling, these models have to be adapted to take the logistics decisions into account.
  • A GRASP metaheuristic for allocating resources to improve the accessibility in a road network after a natural disaster
    Pablo Andres Maya Duque (University of Antwerp )
    Co-authors: K. Sorensen, P. Goos
    Abstract:
    We consider the problem of allocating resources for repairing a rural road network after it has been damaged by a natural disaster. We propose a solution approach based on the GRASP and VNS metaheuristics that aims at maximizing the accessibility of as many people as possible to the main cities or regional centers where the economic and social infrastructure is usually located. The efficiency of our approach is demonstrated both on a set of small- and medium-size instances and on a large instance motivated by a real case. The results point out the importance of OR techniques to support the decision process in disaster management operations, particularly during the recovery phase.
  • Vehicle loading optimization with stochastic supply
    Thierry Pironet (Université de Liège)
    Co-authors: Amand, Arda, Crama, Kronus, Pironet
    Abstract:
    The increased availability of information makes it possible to coordinate processes which are usually functionally separated in large companies, such as production and transportation. This work investigates the optimization of vehicle loading for individual orders over a multiperiod horizon when items have stochastic release dates from production and time windows are imposed for delivery at the customer plant. The loading decisions are made in order to minimize the expected cost. Starting from the deterministic model, we develop scenario-based models for the stochastic version of the problem and we investigate the performance of various solution methods.
  • Location and market area of rail-road terminals
    Sabine Limbourg (HEC-ULg)
    Co-authors: B. Jourquin
    Abstract:
    The European transport policy has focused on sustainable transport solutions. One of its objectives for freight transport is to restore the balance between modes and to develop intermodality. Among the various types of intermodal transport, this research is concerned with rail-road container terminals embedded in a hub-and-spoke network; these terminals will further be referred to as hubs. Hub-and-spoke networks have been implemented in a number of transportation systems where it is favourable to consolidate and disseminate flows at certain locations called hubs. The efficiency of such a network depends on the location of the hubs. The problem is to find the optimal hub locations and to allocate the remaining nodes to these hubs. This problem is known as the p-hub median problem (p-HMP), where p is the number of hubs to locate. This location-allocation problem is proved to be NP-hard, and the time needed to solve it grows with the cube of the number of nodes. Thus, in order to model rail-road transport on the trans-European networks, a subset of nodes that can be considered good potential locations is needed. We applied the p-HMP to sets of potential locations obtained both by spatial aggregation of demand nodes using hierarchical clustering methods and by a flow-based approach which takes the flows of commodities and their geographic spread into account. The latter method gives better results and is therefore retained to determine the set of potential locations. This set is used as input for an iterative procedure. One of the main contributions of this research is to propose this iterative procedure, based on both the p-HMP and the multi-modal assignment problem. Moreover, the objective function of our p-hub median formulation includes the costs for pre- and post-haulage by road, trans-shipment (taking the number of handled containers into account) and rail haulage. Furthermore, in the p-hub median problem the total demand is assigned to the hubs; in this research, however, the demand can be assigned over all the transportation modes, with the possibility (but not the obligation) of using the trans-shipment facilities. Finally, we present a methodology able to compare road and rail-road intermodal market areas that takes the network structures, the operation costs and the location of the rail-road terminals into account. This methodology is applied to the optimal configurations obtained by solving the p-HMP and the p-hub centre problem (p-HCP) for the whole trans-European network. Indeed, the p-HMP pursues an efficiency goal by minimizing the total transportation cost; the hub network design obtained by this method can sometimes lead to unsatisfactory results when worst-case origin-destination pairs are separated by a very large distance. The p-HCP therefore meets the equity objective by minimizing the maximum cost of a combined transport.
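    For reference, a standard path-based formulation of the p-HMP reads as follows (a textbook rendering in our notation, not necessarily the exact model used by the authors). With demand w_{ij}, unit costs c, discount factors \chi, \alpha, \delta for collection, inter-hub transfer and distribution, y_k = 1 if node k becomes a hub, and x_{ijkm} = 1 if the flow from i to j is routed via hubs k and m:

        \min \sum_{i,j,k,m} w_{ij} (\chi c_{ik} + \alpha c_{km} + \delta c_{mj}) x_{ijkm}
        \text{s.t. } \sum_k y_k = p, \qquad \sum_{k,m} x_{ijkm} = 1 \;\; \forall i,j,
        x_{ijkm} \le y_k, \quad x_{ijkm} \le y_m, \qquad x_{ijkm}, y_k \in \{0,1\}.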

Thursday 11:00 - Networks
Room 138 - Chair: B. Fortz
  • Transmission Expansion Planning with Re-design
    Michael Poss (Université Libre de Bruxelles)
    Co-authors: L.S. Moulin, C. Sagastizabal
    Abstract:
    Expanding an electrical transmission network requires heavy investments that need to be carefully planned, often at a regional or national level. We study relevant theoretical and practical aspects of transmission expansion planning, set as a bilinear programming problem with mixed 0-1 variables. We show that the problem is NP-hard and that, unlike the so-called Network Design Problem, a transmission network may become more efficient after cutting off some of its circuits. For this reason, we introduce a new model that, rather than just adding capacity to the existing network, also allows for the network to be re-designed when it is expanded. We then turn to different reformulations of the problem that replace the bilinear constraints using a "big-M" approach. We show that computing the minimal values for the "big-M" coefficients involves finding the shortest and longest paths between two buses. We assess our theoretical results through a thorough computational study on real electrical networks. The comparison of various models and reformulations shows that our new model, allowing for re-design, can lead to sensible cost reductions.
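    For concreteness, the disjunctive linearization in question has the following generic shape for a candidate circuit between buses i and j (our notation, not the paper's: f_{ij} the flow, \theta the bus angles, \gamma_{ij} the susceptance, \bar f_{ij} the capacity, x_{ij} the 0-1 investment decision):

        -M_{ij}(1 - x_{ij}) \le f_{ij} - \gamma_{ij}(\theta_i - \theta_j) \le M_{ij}(1 - x_{ij}), \qquad |f_{ij}| \le \bar f_{ij} x_{ij}.

    A coefficient M_{ij} is valid as soon as it bounds \gamma_{ij}|\theta_i - \theta_j| over all feasible states, which is why the minimal valid values involve shortest- and longest-path computations between the two buses.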
  • Markov Chains to solve the Hamiltonian Cycle Problem
    Giang Nguyen (Universite Libre de Bruxelles)
    Abstract:
    Given a graph, the objective of the Hamiltonian Cycle Problem (HCP) is to find a cycle that passes through every single vertex exactly once, or to determine that this cannot be achieved. Such a cycle is called a Hamiltonian cycle. The HCP is a special case of the better-known NP-complete Traveling Salesman Problem, both of which are computationally difficult to solve. We characterize Hamiltonian cycles of a given graph as global maximizers or minimizers of certain functionals over the space of feasible stochastic matrices; hence, optimal solutions of the ensuing optimization problems determine the existence (or absence) of a Hamiltonian cycle in a given graph. A key result in this class states that the determinant of a certain rank-1 corrected generator matrix is maximized at a subgraph that induces the longest possible cycle in the graph. Consequently, a good estimate of the maximal value of that determinant is sufficient to answer the question about the presence of Hamiltonian cycles in the graph. We also analyse the sensitivity and robustness of each of the above functionals under symmetric and asymmetric linear perturbations.
  • Independent sets and the routing and wavelength assignment problem
    Olivier Hudry (Ecole nationale supérieure des télécommunications)
    Co-authors: Lucile Belgacem, Irène Charon
    Abstract:
    We consider the Routing and Wavelength Assignment (RWA) problem in Wavelength Division Multiplexing optical networks. For a given network G and a given number W of wavelengths, RWA consists in routing in G a maximum number of Scheduled Lightpath Demands (predictable connection requests) within the W available wavelengths. The routing of an SLD s consists in setting up a path between the source and destination nodes of s and a given number of wavelengths during a given time window. To solve this problem, we design a method based on the search for independent sets in conflict graphs.
  • Design of Reliable Networks
    Megha Sharma (Indian Institute of Management Ahmedabad)
    Co-authors: D. Ghosh
    Abstract:
    Reliable networks are those that take the failure-prone nature of real-life networks into account. The problems of evaluating the performance of such networks and of designing such networks are NP-hard. We consider the reliable network design problem, which includes reliable network evaluation as a subproblem. We develop multi-objective genetic algorithms and particle swarm optimization algorithms for this problem, which simultaneously optimize the network setup cost and maximum s-t flow based performance measure(s).

Thursday 11:00 - Nonconvex optimization 1
Room 035 - Chair: F. Bach
  • On the best low multilinear rank approximation of higher-order tensors
    Mariya Ishteva (Université catholique de Louvain, Department of Mathematical Engineering)
    Co-authors: PA. Absil, S. Van Huffel, L. De Lathauwer
    Abstract:
    Higher-order tensors can be thought of as higher-dimensional tables of numbers. Many fields, such as higher-order statistics and biomedical signal processing, exploit such multi-way arrays. We focus on the best low multilinear rank approximation, used for signal subspace estimation and dimensionality reduction. Standard optimization algorithms face a difficulty caused by an invariance property of the cost function. We suggest three geometric algorithms, based on Newton's method, a trust-region scheme and conjugate gradients. We also discuss the issue of local minima.
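    Concretely, the problem can be written as follows (standard in the tensor literature; our notation): given a third-order tensor \mathcal{A}, find matrices U, V, W with orthonormal columns maximizing

        \| \mathcal{A} \times_1 U^\top \times_2 V^\top \times_3 W^\top \|_F^2 .

    The cost is invariant under rotations U -> UQ_1, V -> VQ_2, W -> WQ_3 with Q_i orthogonal; this is the invariance alluded to above, and the reason why optimization on quotient manifolds of the Grassmann type is the natural setting.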
  • Regression on fixed-rank positive semidefinite matrices: a geometric approach
    Gilles Meyer (University of Liège)
    Co-authors: Gilles Meyer, Silvère Bonnabel and Rodolphe Sepulchre
    Abstract:
    In this paper, we adopt a geometric viewpoint to tackle the problem of learning a regression model whose parameter is a fixed-rank positive semidefinite (PSD) matrix. An important instance of that problem is the learning of a distance function parameterized by a fixed-rank PSD matrix. This task is a central issue for many machine learning applications where a data-specific distance has to be constructed, or where an existing distance needs to be improved based on additional side information. Learning low-rank matrices is a typical solution to reduce the computational cost of subsequent algorithms. Indeed, the complexity generally decreases from O(d^3) to O(d*r^2) where the approximation rank r is generally much smaller than the problem size d. Whereas efficient convex formulations exist in the full-rank case, the problem is no longer convex as soon as the rank constraint is introduced. Nevertheless, the set of rank-r PSD matrices has a rich Riemannian geometry that can be exploited for algorithmic purposes. We discuss the choice of two particular geometries of fixed-rank PSD matrices and we derive the corresponding gradient descent algorithms. In contrast to previous contributions in the literature, the range space of the matrix is free to evolve during the optimization and the resulting algorithms enjoy important invariance properties. We apply the two proposed algorithms to the distance learning problem. The good performance of the algorithms is illustrated on several well-known classification and clustering benchmarks.
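    As an illustration of the setup (our rendering of the standard parameterization, not a formula quoted from the paper), a rank-r PSD matrix A defines the distance

        d_A(x, y) = (x - y)^\top A (x - y), \qquad A = G G^\top, \; G \in R^{d \times r},

    and storing and updating the factor G instead of A is what brings the cost down from O(d^3) to O(d r^2).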
  • Generalized Power Method for Sparse Principal Component Analysis
    Rodolphe Sepulchre (Université de Liège)
    Co-authors: Michel Journée, Peter Richtarik, Yurii Nesterov
    Abstract:
    In this paper we develop a new approach to sparse principal component analysis (sparse PCA). We propose two single-unit and two block optimization formulations of the sparse PCA problem, aimed at extracting a single sparse dominant principal component of a data matrix, or more components at once, respectively. While the initial formulations involve nonconvex functions, and are therefore computationally intractable, we rewrite them into the form of an optimization program involving maximization of a convex function on a compact set. The dimension of the search space is decreased enormously if the data matrix has many more columns (variables) than rows. We then propose and analyze a simple gradient method suited for the task. Our algorithm has the best convergence properties when either the objective function or the feasible set is strongly convex, which is the case with our single-unit formulations and can be enforced in the block case. Finally, we demonstrate numerically on a set of random and gene expression test problems that our approach outperforms existing algorithms both in the quality of the obtained solution and in computational speed.
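    The simple gradient scheme referred to can be summarized as follows (our paraphrase): to maximize a convex function f over a compact set Q, iterate the linearization step

        x_{k+1} \in \arg\max_{x \in Q} \langle \nabla f(x_k), x \rangle,

    which for Q the unit sphere reduces to the power-method-like update x_{k+1} = \nabla f(x_k) / \|\nabla f(x_k)\|_2; convexity of f guarantees that the objective never decreases along the iterates.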
  • A pooling approach for the feed mixing problem
    Jannes Verstichel (KaHo Sint-Lieven)
    Co-authors: G. Vanden Berghe, H. Callens
    Abstract:
    Food production for pets and all kinds of livestock involves a large-scale optimization problem with mass balance constraints, nutritional constraints and physical mixing constraints. The problem consists of mixing certain raw materials into recipes at the lowest possible cost. In the case where all the recipes can be stored on site in separate silos, the problem translates into a standard linear program that can be efficiently solved by any linear solver. When the number of available silos is smaller than the required number of final recipes, the problem becomes much harder to solve. In this case intermediate mixtures, which can be stored separately in the available silos, are introduced. These intermediates are made by mixing raw materials in such a way that all the required recipes can be produced from the intermediates. Due to the introduction of these intermediates, many of the balance, nutritional and physical mixing constraints are no longer linear, and neither is the objective function. In the petrochemical industry, a similar problem is known as the pooling problem: a nonlinear problem, which often has nonconvexities in both the constraints and the objective function. The pooling problem aims at maximizing the profit of selling gasoline blends by subtracting the production cost of each blend from the selling price. Many solution methods have been applied to this pooling problem. We will tackle the problem using both the ALT heuristic from Audet et al. (2004) and the nonlinear IPOPT solver in combination with the mathematical modeling tool AMPL.
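    The nonconvexity enters through bilinear quality balances of the kind found in the standard pooling problem; a generic sketch in our notation (not the authors' exact model): for every intermediate (pool) l and nutrient k,

        \sum_i q_{ik} x_{il} = p_{lk} \sum_j y_{lj},

    where q_{ik} is the known nutrient content of raw material i, x_{il} the flow of i into pool l, y_{lj} the flow of pool l into recipe j, and p_{lk} the unknown pool quality. The product p_{lk} y_{lj} of two decision variables is what destroys linearity and convexity.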

Thursday 14:00 - Constraint programming models 1
Room 126 - Chair: Y. Deville
  • Constraint-based Very Large-Scale Neighborhoods
    Sébastien Mouthuy (Université Catholique de Louvain)
    Co-authors: Y.Deville and P.Van Hentenryck
    Abstract:
    VLSN are neighborhoods with a very large number of candidates, allowing the search to escape from local minima where standard local search gets trapped. VLSN have been successfully applied to many problems, but their search algorithms are dedicated implementations specifically designed to solve a given problem. Designing VLSN approaches for new problems is thus time-consuming. Our objective is to design and efficiently implement generic VLSN search procedures that can be used in a constraint-based framework.
  • Implementation in COMET of a Traffic Engineering technique that Preserves IP Fast Reroute
    Trong Viet Ho (UCL, Belgium)
    Co-authors: Yves Deville, Olivier Bonaventure
    Abstract:
    With intradomain routing protocols, operators can significantly improve the load of their links by optimizing the configuration of the link weights (Traffic Engineering, TE); this optimization is known to be an NP-hard problem. Another emerging issue for ISP networks is the fast restoration of IP services in case of failures. Recently, fast reroute techniques have been proposed for pure IP networks; Loop-Free Alternates (LFA) is one of the most promising. An LFA of a router A for a destination d is a neighbor C of A that does not use A to reach d, so that A can prepare itself to directly reroute the traffic destined to d along its link with C upon the failure of the link that A is using to reach d. Our objective is to provide a TE technique that takes the coverage of LFAs into account during the optimization of network resource usage. Constraint Programming and Constraint-Based Local Search are well suited for solving such complex combinatorial problems; in COMET in particular, some classical problems can be modeled in only about a dozen lines of code. We have therefore chosen a local search algorithm implemented in COMET for solving our traffic engineering problem.
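    For reference, the loop-free condition behind LFAs (standardized in RFC 5286, restated here in the notation of this abstract): a neighbor C of A is a loop-free alternate for destination d whenever

        dist(C, d) < dist(C, A) + dist(A, d),

    i.e. C's shortest path towards d does not lead back through A.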
  • Constraint-Based Graph Matching
    Vianney Le Clément (UCLouvain)
    Co-authors: Yves Deville, Christine Solnon
    Abstract:
    Measuring graph similarity is a key issue in many applications. We propose a new constraint-based modeling language for defining graph similarity measures by means of constraints. It covers measures based on univalent matchings, such that each node is matched with at most one node, as well as multivalent matchings, such that a node may be matched with a set of nodes. This language is designed on top of Comet, a programming language supporting both Constraint Programming (CP) and Constraint-Based Local Search (CBLS). Starting from the constraint-based description of the measure, we automatically generate a Comet program for computing the measure. Depending on the measure characteristics, this program either uses CP --which is better suited for computing exact measures such as (sub)graph isomorphism-- or CBLS --which is better suited for computing error-tolerant measures such as graph edit distances. First experimental results show the feasibility of our approach.
  • DynaUniversity2009
    Kevin Ghislain (ULB)
    Co-authors: B. Fortz, S. Zampelli, P. Schaus, A. Zanarini
    Abstract:
    DynaUniversity2009 : Exams Timetabling

Thursday 14:00 - Vehicle routing
Room 130 - Chair: S. Limbourg
  • Integrating empty container allocation with vehicle routing decisions
    Kris Braekers (Universiteit Hasselt)
    Co-authors: Gerrit K. Janssens and An Caris
    Abstract:
    The problem of managing empty containers at a regional level involves efficiently repositioning empty containers between importers, exporters, intermodal terminals, inland container depots and ports. In the past the problem has been divided into two subproblems: an empty container allocation problem and a vehicle routing problem. This paper shows the advantage of integrating allocation and routing decisions into a single model. Model formulations and numerical experiments are discussed.
  • Searching for reliable routes in case of customer demand dependence
    Patrick Schittekat (University of Antwerp)
    Co-authors: Frederik Michieks
    Abstract:
    Effective logistic operations are nowadays extremely important for almost every company's bottom line. An important part of the quality of fixed routes is determined by how well they cope with varying customer demand. In some industries (gasoline, grain, retail, etc.), there exists a high degree of dependence between the demands of individual customers. In the stochastic CVRP literature, this kind of relationship between customer demands is generally neglected.
  • Effective routing for couriers: a divide and conquer strategy
    Joos Van Den Bergh (University of Antwerp)
    Co-authors: Kenneth Sorensen
    Abstract:
    The vehicle routing problems faced by courier companies are among the most challenging ones in existence. Unlike some other difficult routing problems, this is not so much due to the mathematical complexity of the routing problem, but to the enormous amounts of deliveries that need to be performed every day, and to a number of requirements that are generally overlooked in vehicle routing applications. These requirements include the fact that customers that frequently require delivery should always be visited by the same driver, and the fact that sorting should start before all drop locations are known. More than a computational challenge, this presents an organizational challenge to the planners at routing companies. Up to now, efficient tools for the planning of these challenging routing problems are lacking. In logistics optimization, the general assumption is that integrated decision making is better. A consequence is that the different optimization problems that appear in the operational planning of the supply chain should be combined into a single global optimization problem and solved as such. In this talk, we demonstrate that this approach does not work for large and complex courier routing problems. On the contrary: integrating all the decisions in the vehicle routing problem faced by courier companies results in an organizational nightmare. Rather, courier companies successfully divide their vehicle routing problems into more manageable sub-problems that are solved independently. This has many organizational advantages, but presents additional challenges, such as the question of how the different optimization problems should be coupled. In this talk, we present the global routing problem faced by courier companies, and present a technique to efficiently solve it by dividing it into several subproblems. We discuss the coupling of these subproblems and present some challenges for future research.

Thursday 14:00 - Combinatorial optimization and IP applications
Room 138 - Chair: Q. Louveaux
  • Using mixed integer programming to win a cycling game
    Jeroen Beliën (hogeschool universiteit brussel)
    Co-authors: Dries Goossens, Daam Van Reeth, Liesje De Boeck
    Abstract:
    This paper presents an application of optimization modeling to win a popular cycling game. The application involves real-life data of today's cyclists and challenges the students because of the competition aspect. Since the developed optimization model contains features of knapsack problems, multiperiod inventory problems and integer programming modeling, it is perfectly suited as a concluding case study in an undergraduate operations research/management science course. Moreover, the application also sharpens the understanding of the working of stock exchange markets and is, therefore, also interesting for finance courses. The application was originally developed for an MBA operations research course focusing on spreadsheet modeling skills, but it can also be used in courses that focus on algebraic modeling of optimization problems.
  • Budget Constraint in Reforestation for Sediment Flow Minimization
    Pablo Vanegas (Katholieke Universiteit Leuven)
    Co-authors: Dirk Cattrysse, Jos Van Orshoven
    Abstract:
    This paper proposes to extend the heuristic method CAMF for locating optimal areas to be reforested in order to minimize sediment flow. The proposed extension (CAMF-B) restricts the selection of locations according to a budget constraint instead of a predefined number of spatial units. The sediment minimization is thus modelled as a knapsack problem. To measure the performance of the heuristic, its results are compared with those obtained with an Integer Programming (IP) formulation.
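    In knapsack form, the stated modelling can be sketched as follows (our illustration with hypothetical symbols): if r_i is the sediment-flow reduction obtained by reforesting spatial unit i, c_i its cost and B the available budget,

        \max \sum_i r_i x_i \quad \text{s.t.} \quad \sum_i c_i x_i \le B, \qquad x_i \in \{0,1\},

    where x_i = 1 if unit i is selected for reforestation.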
  • The generalized sequential ordering problem for laser cutting toolpath generation
    Reginald Dewil (Katholieke Universiteit Leuven)
    Co-authors: Pieter Vansteenwegen, Dirk Cattrysse
    Abstract:
    In toolpath generation for laser cutting, one tries to find the shortest path that cuts all required edges and does not violate precedence constraints. These constraints can originate from three different sources: islands, common cuts, and heat conduction. We propose to model this problem of generating toolpaths for sheet metal laser cutting as a more general case of the generalized traveling salesperson problem (GTSP) and the sequential ordering problem (SOP). The generalized sequential ordering problem (GSOP) can then be described as a GTSP with precedence constraints between cities and/or districts.
  • A real world 1D stock cutting problem: exact and heuristic algorithms
    Wim Vancroonenburg (KaHo Sint-Lieven)
    Co-authors: T. Wauters, G. Vanden Berghe
    Abstract:
    In a modern joinery shop floor, profiles are cut from different materials (e.g. wood, aluminium and plastics) with different properties (e.g. color). For constructing door and window frames, the profiles need to be cut under varying angles in order to make assembly possible. The process of cutting materials also causes losses proportional to the width of the saw blade and dependent on the angle under which the material is being cut. Lastly, cutting stock generates leftover material that sometimes still has some value. Companies are interested in reusing these leftover pieces in order to minimize waste and to reduce costs. All these factors result in the real-world stock cutting problem that we will present. We give a brief introduction to a comparative study of recent optimisation methods for this problem, looking at both exact and heuristic algorithms. Our first results show improvements of about 1% compared to the methods currently used in practice.

Thursday 14:00 - Nonconvex Optimization 2
Room 035 - Chair: R. Sepulchre
  • Discriminative Clustering for Image Co-segmentation
    Francis Bach (INRIA - Ecole Normale Superieure)
    Abstract:
    Purely bottom-up, unsupervised segmentation of a single image into two segments remains a challenging task for computer vision. The co-segmentation problem is the process of jointly segmenting several images with similar foreground objects but different backgrounds. In this work, we combine existing tools from bottom-up image segmentation such as normalized cuts, with kernel methods commonly used in object recognition. These two sets of techniques are used within a discriminative clustering framework: we aim to assign foreground/background labels jointly to all images, so that a supervised classifier trained with these labels leads to maximal separation of the two classes. In practice, we obtain a combinatorial problem which is relaxed to a continuous convex optimization problem, that can be solved efficiently for up to dozens of images (using appropriate manifold optimization techniques). We show that our framework works well on images with very similar foreground objects, which are usually considered in the literature, as well as on more challenging problems with objects with higher intra-class variations.
  • Fast Oriented Bounding Box Computation Using Particle Swarm Optimization
    Pierre Borckmans (UCL (INMA))
    Co-authors: Pierre-Antoine Absil
    Abstract:
    The problem of finding the optimal oriented bounding box (OBB) for a given set of points in R³, yet simple to state, is computationally challenging. Existing state-of-the-art methods dealing with this problem are either exact but slow, or fast but very approximative and unreliable. We propose an adaptation of Particle Swarm Optimization (PSO) over the rotation group SO(3) using concepts of differential geometry. The symmetry of the problem is also exploited. Numerical experiments show that the proposed algorithm outperforms existing methods, often by far.
  • The time-optimal helicopter trajectory is a circle segment
    Natalya Usotskaya (Maastricht University)
    Co-authors: A.Berger, A.Grigoriev
    Abstract:
    This talk addresses the problem of determining a time-optimal helicopter trajectory between two points in three-dimensional space without any obstacles. The absolute value of the helicopter speed decreases linearly in altitude, i.e., v(y) = max{v* - qy, 0}, where v* is the helicopter velocity at ground level and q is the velocity loss per meter of altitude. Although intuitively one might think that the optimal trajectory is a straight line, in most cases this is not true. We show that the time-optimal trajectory is, in fact, a circle segment. First, we present a simple and intuitive proof of this theorem. Second, using the Euler-Lagrange equations, we prove that the derived trajectory is the only time-optimal solution in the class of continuously differentiable trajectories.
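    In our notation, the variational problem being solved is the minimization of the travel-time functional (assuming v(y) = v* - qy stays positive along the trajectory)

        T[y] = \int \frac{ds}{v(y)} = \int_{x_0}^{x_1} \frac{\sqrt{1 + y'(x)^2}}{v^* - q\,y(x)}\, dx,

    which is formally the geodesic problem in a Poincaré half-plane metric whose boundary is the altitude y = v^*/q where the speed vanishes; the geodesics of that metric are precisely circular arcs centered on this boundary line.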
  • BFO: a simple "brute-force" optimizer
    Philippe Toint (Fac. Universitaires Notre-Dame de la Paix)
    Abstract:
    A very simple algorithm is described for derivative-free unconstrained or bound-constrained mixed-integer nonlinear local optimization. Inspired by pattern search methods like MADS (Audet and Dennis), Coope and Price, or the proposals by Torczon and co-authors, it proceeds by optimizing on successively finer grids, the grid being possibly constrained to a discrete set of values in some directions. It is also shown how the resulting BFO algorithm has been used to tune its own algorithmic parameters. Preliminary numerical experience is also discussed.
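    To fix ideas, here is a minimal sketch of ours of a mixed-integer pattern search on successively finer grids (purely illustrative, not the actual BFO code):

        def pattern_search(f, x0, step, is_int, tol=1e-6, shrink=0.5, max_iter=10000):
            # Derivative-free search: poll +/- step[i] along each coordinate and
            # move to any improving grid point; when no move improves, refine the
            # mesh. Coordinates with is_int[i] True stay on the integer lattice,
            # so their mesh size never drops below 1.
            x, fx = list(x0), f(x0)
            for _ in range(max_iter):
                improved = False
                for i in range(len(x)):
                    for d in (step[i], -step[i]):
                        y = list(x)
                        y[i] += d
                        fy = f(y)
                        if fy < fx:
                            x, fx, improved = y, fy, True
                            break
                if not improved:
                    can_refine = False
                    for i in range(len(x)):
                        if is_int[i]:
                            if step[i] > 1:
                                step[i] = max(1, int(step[i] * shrink))
                                can_refine = True
                        elif step[i] > tol:
                            step[i] *= shrink
                            can_refine = True
                    if not can_refine:
                        break
            return x, fx

    For instance, pattern_search(lambda z: (z[0]-3)**2 + (z[1]-0.7)**2, [0, 0.0], [4, 1.0], [True, False]) homes in on the mixed-integer minimizer (3, 0.7).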

Thursday 16:10 - Constraint programming models 2
Room 126 - Chair: P. Schaus
  • Failure Detection for the Bin-Packing Constraint
    Julien Dupuis (UCLouvain)
    Co-authors: Pierre Schaus, Yves Deville
    Abstract:
    In addition to a filtering algorithm, the Pack constraint introduced by Shaw uses a failure detection algorithm. This test is based on a reduction of the partial solution to a standard bin-packing problem and the computation of a bin-packing lower bound (BPLB) on the reduced problem. A first possible improvement on Shaw's test is to use a stronger BPLB. In particular, Labbé's lower bound was recently proved to dominate Martello's lower bound used by Shaw. A second possible improvement is to use a reduction different from the one introduced by Shaw. We propose two new reduction algorithms and prove that one of them theoretically dominates the others. All the proposed improvements on the failure test are evaluated using the Comet System.
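    For intuition, the simplest bound in this family is the continuous lower bound (both Martello's and Labbé's bounds dominate it): with item sizes w_i and bin capacity C,

        L_1 = \left\lceil \frac{\sum_i w_i}{C} \right\rceil,

    and the partial solution can be failed immediately whenever the bound computed on the reduced instance exceeds the number of available bins.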
  • Finite Domain Modeling and Solving in the Monadic Constraint Programming Framework
    Pieter Wuille (KULeuven)
    Co-authors: Tom Schrijvers
    Abstract:
    The Monadic Constraint Programming framework (MCP) provides an embedded domain specific language (EDSL) for Constraint Programming (CP). It supports multiple problem domains, solvers and variable types. Search is specified independently using composable search transformers. The framework contains a Finite Domain (FD) modeling subsystem which supports simple expressions and operators over integers, booleans and arrays. The model is solved effectively using one of many backends, including the C++ Gecode library.
  • Constrained Optimum Paths Problems with LS(Graph)
    Quang Dung Pham (UCLouvain)
    Co-authors: Yves DEVILLE, Pascal van HENTENRYCK
    Abstract:
    Constrained Optimum Path (COP) problems appear in many real-life applications, especially in communication networks. Some of these problems have been considered and solved by specific techniques which are usually difficult to extend. In this paper, we introduce a novel modeling for solving some COPs by local search. The modeling features compositionality, modularity and reuse, and strengthens the benefits of Constraint-Based Local Search. The proposed modeling approach is based on spanning trees in order to simplify the neighborhood computation. We also apply the modeling to the Resource Constrained Shortest Path (RCSP) problem and to the edge-disjoint paths problem (EDP), and show that side constraints can easily be added to the model. Computational results show the significance of the approach.
  • On expressing nurse rostering constraints as propositional satisfiability problems
    Tommy Messelis (KULeuven Campus Kortrijk)
    Co-authors: Stefaan Haspeslagh, Patrick De Causmaecker
    Abstract:
    This study focuses on a translation scheme to get from the Nurse Rostering Problem (NRP) to a general propositional SATisfiability problem (SAT). The idea is to assign a set of numbers to the shifts worked by an employee and then to express the constraints in terms of these numbers. In total, we need O(n^2) clauses and O(n log n) variables to express our problem with n numbers as a general SAT problem, where n is at most the total number of time units in the planning horizon.
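    For intuition about these orders of magnitude, consider an illustrative textbook encoding (not necessarily the authors' scheme): stating that an employee works at most one of the shifts x_1, ..., x_n takes the pairwise clauses

        \bigwedge_{1 \le i < j \le n} (\lnot x_i \lor \lnot x_j),

    i.e. O(n^2) clauses and no auxiliary variables; more economical encodings of such counting constraints trade auxiliary variables for fewer clauses.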

Thursday 16:10 - Performance modeling
Room 130 - Chair: G. Janssens
  • Dynamic Performance Measurement and Evaluation: Will Bridging Paradigms Lead to Improved System Design?
    Konstantinos Triantis (Virginia Tech/Northern Virginia Center)
    Co-authors: Warren Vaneman, Kalyan Pasupathy
    Abstract:
    Within the domain of systems engineering, conceptual frameworks are used to assist engineers, managers, and policy makers in determining new or modified system designs. The designs are driven by requirements, set typically by the users, and are monitored using technical performance measures (TPMs). System designs are considered effective if they meet the pre-determined TPM values along with life-cycle cost and schedule targets. Therefore, in terms of measuring and assessing system design, one would expect synergy between the systems engineering and performance measurement literatures. One possible synergistic thrust between these two bodies of literature is the modeling and assessment of dynamic system performance. There are many reasons for focusing on the concept of dynamic performance, one of them being the effective management of the design process during change initiatives. In spite of the exceptional guidance available in the literature, the activities that take place during transitional periods (e.g., the introduction of new technologies, the implementation of new training programs, etc.) are often the most disruptive, and contain the furthest-reaching performance and cost consequences, of any periods in the life-cycle of systems. One of the reasons for this unfortunate result is the failure of change techniques and methods to identify an efficient path of transition from the old way of doing business to a new performance paradigm. The organization's ability to master these transient periods is fundamental to achieving steady-state operations more efficiently, thus reducing losses due to sub-optimal performance. An approach that can directly account for dynamic performance measurement and evaluation during these transitional periods is the dynamic performance measurement and evaluation model (DPEM). DPEM explicitly considers causal relationships within an operational environment and allows for the testing of different design alternatives. The primary objective of this paper is to present this approach and review it in relation to other dynamic measurement approaches found in the literature. Examples that illustrate the implementation of the approach are discussed: the provision of electric power, data archival and maintenance, and the provision of social services within a service supply chain. A second objective of this paper is to discuss why dynamic considerations can potentially lead to improved system designs, and a third is to outline specific future modeling and implementation challenges that require further research.
  • Measuring the Technical Efficiency of Airports in Latin America and the Caribbean
    Sergio Perelman (Université de Liège)
    Co-authors: Tomas Serebrisky
    Abstract:
    This paper studies the technical efficiency of airports in Latin America and the Caribbean (LAC) and its determinants. The evolution of productive efficiency in the LAC region has seldom been studied, mainly due to the lack of publicly available data. Relying on a unique dataset obtained through questionnaires distributed to airport operators, we use Data Envelopment Analysis (DEA) methods to compute an efficient production frontier and compare the technical efficiency of LAC airports relative to airports around the world. In a second stage we estimate a truncated regression to study the drivers of observed differences in airport efficiency. According to our results, institutional variables (private/public operation), the socioeconomic environment (GDP level) and airport characteristics (hub status, share of commercial revenues) matter in explaining airport productive efficiency. Finally, we also estimate total factor productivity changes for LAC airports for the period 1995-2007. LAC is a region that has implemented a wide variety of private sector participation schemes for the operation of airports since the mid-90s. Our results show that private operators have not had higher rates of total factor productivity change.
  • Developments in Freight Modeling
    Justyna Bakowska (Hasselt University)
    Co-authors: An Caris, Katrien Ramaekers,Gerrit Janssens, Tom Bellemans
    Abstract:
    Activity-based modeling is the latest method of travel and freight demand forecasting. Several national activity-based freight models have been created in Europe. These logistics models aim at describing trade-offs between transport and inventory levels by defining special patterns for commodity flows, the changing usage of infrastructure, the cost of freight movements and the economic impact of freight policies.

Thursday 16:10 - Scheduling
Room 138 - Chair: K. Sorensen
  • Using Lagrangian relaxation to solve a dock assignment problem
    Lotte Berghman (Katholieke Universiteit Leuven)
    Co-authors: Roel Leus
    Abstract:
    This paper presents a model for a dock assignment problem based on the situation encountered in a practical case, where a schedule is needed specifying the starting time and the assigned resource for each activity. We propose a Lagrangian heuristic for solving this scheduling problem, as follows. We implement Lagrangian relaxation of a time-indexed formulation for a three-stage flexible flow shop representing the dock assignment problem. Relaxed solutions are subsequently rendered feasible by means of an enumerative tree search.
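    The generic shape of the bound exploited by such a heuristic (standard Lagrangian duality, our notation): dualizing a family of complicating constraints Ax >= b of the time-indexed model with multipliers \lambda >= 0 yields

        Z_{LR}(\lambda) = \min_{x \in X} \; c^\top x + \lambda^\top (b - Ax) \le Z^*,

    a lower bound on the optimal cost Z^* for every \lambda >= 0; the multipliers are then improved iteratively (typically by subgradient steps) while the relaxed solutions are repaired into feasible schedules.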
  • A Hybrid Algorithm for Bi-objective Flowshop Scheduling
    Jérémie Dubois-lacoste (IRIDIA, CoDE, Université Libre de Bruxelles)
    Co-authors: Manuel López-Ibáñez and Thomas Stützle
    Abstract:
    In this work we tackle bi-objective flowshop scheduling problems that arise from the pairwise combination of the minimization of: makespan, sum of flowtimes and total (weighted) tardiness. We propose a new high performing algorithm which combines two general frameworks for multi-objective optimization. Comparison with existing algorithms shows that our final algorithm clearly improves upon the state-of-the-art for the three bi-objective problems.
  • A hybrid learning and combinatorial optimization approach for automotive maintenance scheduling
    Tony Wauters (KaHo Sint-Lieven)
    Co-authors: Jannes Verstichel, Katja Verbeeck, Greet Vanden Berghe
    Abstract:
    In this abstract we present methods for solving a real world task scheduling problem, in the automotive maintenance sector. Tasks have to be assigned to personnel taking into account their skills, available working periods and other constraints. We present a model for this complex and highly constrained scheduling problem, and an approach for solving it.
  • A Variable Neighbourhood Search metaheuristic for scheduling the hot rolling operations at a steel mill
    Kenneth Sörensen (Universiteit Antwerpen)
    Co-authors: Dirk Cattrysse
    Abstract:
    We develop a variable neighbourhood search algorithm for scheduling the hot rolling operations at a steel mill. Our algorithm successfully determines which slabs to schedule and in which order.

Thursday 16:10 - Planning under uncertainty
Room 035 - Chair: R. Leus
  • Financial flow modeling for non-state pension funds
    Maria Govorun (ULB, Belgium)
    Abstract:
    We present the construction of a profit-and-loss model for pension fund organizations. The model makes it possible to construct different pension insurance programs, to project them into the future and to obtain financial results. The model constructs the dynamics of a client population and calculates all insurance quantities. Another function of the model is to provide a sensitivity analysis of the financial result. Thus, the model helps in decision making, especially in creating new pension insurance schemes.
  • R&D project planning with multiple trials in uncertain environments
    Stefan Creemers (K.U.Leuven)
    Co-authors: Roel Leus
    Abstract:
    We study project scheduling when individual activities carry a risk of failure, and where an activity's failure may lead to the project's overall failure. In the project planning and scheduling literature, this technological uncertainty has typically been ignored and project plans are developed only for scenarios in which the project succeeds. To mitigate the risk that an activity's failure jeopardizes the entire project, more than one alternative may exist for obtaining certain results, and these alternatives can be implemented either in parallel or sequentially, which allows modeling the pursuit of alternative technologies.
  • Impact of Demand Unconstraining and Dependency on Airlines Revenue Performance
    Sandeep Karmarkar (Indian Institute of Management, Ahmedabad)
    Co-authors: G. Dutta, T. Bandyopadhyay
    Abstract:
    Accurate forecasts are crucial to revenue management, but historical demands can be censored. In this paper, we unconstrain the correlated censored demands using the EM algorithm. These forecasts are used to find booking limits under uncorrelated and correlated demands for two fare classes. A comparison across four cases (unconstrained vs. constrained, and dependent vs. independent demands) shows that revenue can be increased by up to 2%.
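    For a flavour of the unconstraining step, consider a standard single-class illustration (not the paper's multivariate model): if demand D ~ N(\mu, \sigma^2) is censored at a booking limit c, the E-step replaces each censored observation by the conditional mean

        E[D | D \ge c] = \mu + \sigma \frac{\phi(\alpha)}{1 - \Phi(\alpha)}, \qquad \alpha = \frac{c - \mu}{\sigma},

    with \phi and \Phi the standard normal density and distribution function; the M-step re-estimates \mu and \sigma from the completed sample, and the two steps are iterated until convergence.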

Friday 09:00 - Metaheuristics
Room 126 - Chair: J. Teghem
  • Combining metaheuristics and exact methods to solve the multiobjective multidimensional knapsack problem
    Thibaut Lust (Faculté Polytechnique de Mons)
    Co-authors: Jacques Teghem
    Abstract:
    We study the multiobjective and multidimensional version of the knapsack problem (MOMKP). Few exact methods have been developed for solving this problem. We present here a hybridization between an exact method and the 2PPLS method (Lust and Teghem, 2009) in order to solve the MOMKP. The neighborhood used in 2PPLS is based on an exploration realized by a method that solves small MOMKP instances exactly or heuristically. The results of the comparison with state-of-the-art methods are reported.
  • Hyper-heuristics learning a varying set of low-level heuristics
    Mustafa Misir (KaHo Sint-Lieven - Katholieke Universiteit Leuven)
    Co-authors: Katja Verbeeck, Greet Vanden Berghe, Patrick De Causmaecker
    Abstract:
    The main motivation behind using hyper-heuristics is to provide generality for solving different combinatorial optimisation problems. Hyper-heuristics perform on a higher level than traditional search and optimisation strategies: they operate on a set of solution approaches (i.e. low-level heuristics) rather than on the set of solutions directly. The performance of heuristics can vary from one problem (instance) to another; they may even behave differently in various search regions of one problem. Therefore, a management mechanism on top of a number of search algorithms can help to find the most appropriate heuristics to apply. This kind of management can be handled by choice hyper-heuristics. A simple choice hyper-heuristic consists of a heuristic selection mechanism and a move acceptance mechanism: while the heuristic selection mechanism chooses heuristics to apply to the solution(s) at hand, the move acceptance mechanism decides whether the new solution(s) is good enough to accept. In this study, we focus on the heuristic selection part. We propose a method for determining the set of best-performing heuristics that should be used during different phases of a search. This subset is formed based on the relative improvement per execution time of each heuristic. At the end of each phase, heuristics are ranked according to a quality index: the quality index of the best-performing heuristic is set to n (the number of heuristics) and that of the worst heuristic is set to 1; the index of heuristics that do not provide any improvement is automatically set to 1. All heuristics with a quality index lower than average are excluded from the search for the next phase. In addition, when the set of best-performing heuristics becomes too small, all the excluded heuristics re-enter the heuristic set and the algorithm repeats itself. We apply our new hyper-heuristic approach to a set of home care scheduling problem instances.
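    A minimal sketch of this phase-wise update (ours; the interfaces and names are hypothetical, and the improvements and execution times are assumed to be collected during the phase):

        def update_heuristic_set(heuristics, improvement, time_spent, min_size=2):
            # One phase of the adaptive subset selection: rank heuristics by
            # improvement per unit of execution time, give the best rank n and
            # the worst (and any non-improving heuristic) rank 1, then keep
            # only those scoring at least the average quality index.
            n = len(heuristics)
            perf = {h: improvement[h] / max(time_spent[h], 1e-9) for h in heuristics}
            ranked = sorted(heuristics, key=lambda h: perf[h])   # worst -> best
            quality = {h: (rank if perf[h] > 0 else 1)
                       for rank, h in enumerate(ranked, start=1)}
            avg = sum(quality.values()) / n
            kept = [h for h in heuristics if quality[h] >= avg]
            if len(kept) < min_size:                 # set became too small:
                kept = list(heuristics)              # re-enter all heuristics
            return kept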
  • Using the PlayStation3 for speeding up metaheuristic optimization
    Sofie Van Volsem (Ghent University)
    Co-authors: S. Neirynck
    Abstract:
    For the problem of optimizing inspection strategies in multi-stage production systems, a metaheuristic consisting of an evolutionary algorithm with embedded simulation was developed in previous work of Van Volsem et al. In this work, in an effort to reduce computation time, the metaheuristic was adapted for computation on the Cell Broadband Engine (PlayStation3). The potential of SIMD computation for speeding up the random number generation process is also investigated.
  • No-Wait Scheduling of a Single-Machine to Minimize the Maximum Lateness
    Imed Kacem (Université Paul Verlaine - Metz)
    Co-authors: Hans Kellerer
    Abstract:
    This paper is the first attempt to design efficient approximation algorithms for the single-machine maximum lateness minimization problem when jobs have different release dates and tails (or delivery times) under the no-wait assumption (i.e., the schedule cannot contain any idle time between two consecutive jobs on the machine). Our work is motivated by interesting industrial applications in the production area. Our analysis shows that the algorithms of Potts and Schrage achieve the same worst-case performance ratios as obtained for the relaxed problem without the no-wait constraint. We then propose a polynomial-time approximation scheme with efficient running time for the considered problem.

Friday 09:25 - Production and distribution
Room 130 - Chair: Y. Arda
  • Aggregate production-distribution planning with shared production resources
    Wout Dullaert (University of Antwerp)
    Co-authors: B. Raa
    Abstract:
    This paper considers the aggregate production-distribution planning problem for a producer of injection-moulding plastic products that has multiple plants and warehouses across Europe. The goal is to quantify the savings potential offered by exchanging moulds between the plants. A mathematical model is presented that solves relatively small problem instances to optimality within reasonable computation times. For the larger problem instances, a two-phase heuristic solution approach is presented.
  • Variable Neighborhood Approaches For a Multi-Echelon Capacitated Location-Distribution Problem
    Frederic Semet (Ecole Centrale de Lille)
    Co-authors: B. Gendron, P.-V. Khuong
    Abstract:
    We consider a multi-echelon location-distribution problem arising from an actual application in fast delivery service. We present variable neighborhood heuristic methods to solve large-scale instances of this problem and report computational results on a large set of instances derived from the application.
  • Stackelberg game in transportation
    Guillaume Amand (Université de Liège)
    Abstract:


Friday 09:00:00 Multiple criteria
Room 138 - Chair: R. Bisdorff
  • Regional Development in a Well-Being Economy: The Case of Lithuania
    Willem K. Brauers (University of Antwerp)
    Co-authors: R. Ginevicius, R. Bisdorff
    Abstract:
    The computation of a regional income, as an exponent of the welfare economy, is not sufficient for measuring the well-being of a regional population. The well-being economy goes further: each individual would have to feel good concerning material wealth, health, education, all kinds of security and the environment. In other words, multiple objectives have to be fulfilled, and these objectives are expressed in different units. Weights are most of the time used to equalize these different units; however, the introduction of weights also means the introduction of subjectivity. In order to avoid this dilemma, the internal mechanical solution of a ratio system, producing dimensionless numbers, is preferred. In addition, the use of the obtained ratios creates the opportunity to arrive at a non-subjective reference point theory. This double approach is called the MOORA method (Multiple Objectives Optimization by Ratio Analysis). The choice and importance of the objectives is also non-subjective if all stakeholders involved come to an agreement. The theory is applied to the different regions of Lithuania. A redistribution of income has to take place from the Lithuanian regions with higher well-being to the poorer regions, but under limiting conditions and for well-defined and eventually controlled projects.
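    For readers unfamiliar with MOORA, its ratio system can be summarised as follows (a recap of the standard published formulation, in our own notation; the reference point step is omitted):

        x^*_{ij} = \frac{x_{ij}}{\sqrt{\sum_j x_{ij}^2}}, \qquad
        y_j = \sum_{i \in \Omega_{\max}} x^*_{ij} - \sum_{i \in \Omega_{\min}} x^*_{ij}

    Here i indexes the objectives, j the alternatives (here, regions), and \Omega_{\max} (\Omega_{\min}) collects the objectives to be maximized (minimized). Alternatives are ranked by y_j; no weights are needed because the ratios x^*_{ij} are dimensionless.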
  • Evaluation of multi-criteria techniques for project portfolio management
    Sylvie Busschaert (University of Antwerp)
    Abstract:
    Project Portfolio Management (PPM) is a relatively new management discipline which helps organizations to compose and manage their project portfolios. Although PPM covers a much wider area than project portfolio selection, the latter will be the focus of this presentation. After all, the selection process determines the investment decisions, and thus the future profitability, of every organization. This awareness is also growing in business, and organizations are gradually taking steps to make their current selection procedures more objective. There is a wide variety of tools and techniques available to assist organizations with this activity. Tools and techniques for graphical representations and financial models are the most frequently used. However, the complexity of this decision problem demands the application of more advanced techniques belonging to the field of Multicriteria Decision Making (MCDM). These methods not only give the decision maker(s) the possibility to incorporate multiple criteria into the selection process, but in addition allow the preferences of the decision maker to be modelled in case of lacking information. In this talk, we thoroughly compare three MCDM techniques: Weighted Scoring, the Analytical Hierarchy Process (AHP) and ARGUS. Since a ‘good’ selection method is user-friendly as well as methodologically correct, the conformance of each of these methods with the following four parameters is examined: (1) transparency, (2) ease of manipulation, (3) time required for execution and (4) respect for scales. Although user-friendliness is generally perceived as the most important factor distinguishing a ‘good’ from a ‘bad’ selection method, this presentation demonstrates the importance of the methodological characteristics of the method applied. A real case example of a multi-criteria project selection study carried out at the Flemish Government illustrates that, when measurement scales are not respected, one cannot state with certainty that the decision results are reliable and not coincidental. Although none of the tested methods for project portfolio selection dominates the others, i.e. no method complies with all methodological and operational criteria, we find that ARGUS represents the best compromise between these two worlds. The abstraction of the ARGUS algorithm makes manipulation rather difficult, the availability of user-friendly software shortens the time for execution significantly and, even more importantly, it is the only method of the three which respects the ordinal scale of the decision variables. However, due to its lack of transparency, the acceptance of ARGUS in practice remains a big question mark.
  • Multicriteria decision making in a multi-level manpower system
    Marie-anne Guerry (Vrije Universiteit Brussel)
    Abstract:
    Markov manpower planning models have been extensively analyzed in the past in order to find an optimal personnel strategy for which the manpower system evolves towards a desirable stock vector. In general, those models support top-level strategic decision making and only consider overall stocks; none of them take into account interactions among different organizational decision levels. In this paper, a multi-level manpower planning model is developed that considers, besides the desirable stock vector at the overall level, proposals for the departmental stocks from lower organizational levels. For a multi-level manpower system, the concept of control by recruitment and interdepartmental transitions is introduced, and the attainability of stock vectors at the departmental level is examined under this condition. Moreover, criteria are formulated under which stock vectors at the departmental level are acceptable for the top, and an algorithm is presented to find realizable approximations, i.e. stock vectors that are attainable as well as acceptable and for which the discrepancy with the proposed stock vector is minimized. Finally, a multi-criteria algorithm is described to determine attainable and acceptable stocks that, at the overall organizational level, are a compromise between the proposal from the top and the proposals from the departments. The compromise for the desired stocks can be formulated in terms of an optimal recruitment strategy.
  • Sensitivity analysis of the additive model in data envelopment analysis while inputs and outputs are fuzzy data
    Mahsa Faizrahnemoon (Islamic Azad University, Tehran Science and Research Branch)
    Co-authors: A. Davoodi
    Abstract:
    Data envelopment analysis (DEA) requires input and output data to be precisely known, which is not always the case in real applications. In this paper, sensitivity analysis of the additive model in data envelopment analysis is studied when inputs and outputs are symmetric triangular fuzzy numbers. Sufficient conditions are established for a simultaneous change of all outputs and all inputs of an efficient decision making unit (DMU) that preserves efficiency. The changes are exerted on the cores of the symmetric triangular fuzzy numbers in such a way that the values of the inputs increase and the values of the outputs decrease.
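    For reference, the crisp additive DEA model whose fuzzy-data sensitivity the talk analyses can be stated as the following linear program (standard formulation, in our notation), where DMU_o with input vector x_o and output vector y_o is efficient exactly when the optimal value is zero:

        \max_{\lambda,\, s^-,\, s^+} \; \mathbf{1}^\top s^- + \mathbf{1}^\top s^+
        \quad \text{s.t.} \quad X\lambda + s^- = x_o, \quad
        Y\lambda - s^+ = y_o, \quad \mathbf{1}^\top \lambda = 1, \quad
        \lambda,\, s^-,\, s^+ \ge 0.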

Friday 09:25:00 Stochastic models (9:25)
Room 035 - Chair: L. Esch
  • Extra-Small Scenario Trees for Multistage Stochastic Programming
    Boris Defourny (University of Liege)
    Co-authors: Damien Ernst and Louis Wehenkel
    Abstract:
    We use supervised learning to infer decision policies from the decisions extracted from a scenario tree approximation to a multistage stochastic program. This makes it possible to perform a reliable out-of-sample validation of the decisions efficiently. Since good trees can be identified efficiently, we tune and validate, on problems with many stages, a method for generating extremely small scenario trees that have a sufficient probability of being a smart approximation to the original problem.
  • Extinction Probability of a Branching Process in a Markovian Random Environment
    Sophie Hautphenne (Université Libre de Bruxelles)
    Co-authors: Guy Latouche
    Abstract:
    We study a particular class of branching processes, called Markovian binary trees, whose parameters are controlled by a Markovian random environment. In this context, the individuals of the branching process do not behave independently of each other, which makes the analysis of the extinction probability much more involved than in the standard case without a random environment. We propose two algorithmic methods with a probabilistic interpretation to compute the extinction probability.
  • A clustering approach to estimate route travel time distributions
    Wouter Charle (K.U.Leuven)
    Co-authors: Francesco Viti, Chris M.J. Tampère
    Abstract:
    Accurate network travel time estimation is today one of the most challenging problems in traffic theory. The mainstream research on travel time estimation concentrates on the estimation of mean route travel time or some measures of travel time reliability (i.e. 10th and 90th percentiles). However, given the growing detail in travel time measurements it is also possible to estimate route travel time distributions. This serves a broader spectrum of applications and provides more useful information. For this goal, this research presents a method to calculate the travel time histogram of a route, based on link travel time observations. The aim of this new method is to allow the development of an advanced route planner that is able to optimize route choice reliability. The notion of reliability is defined by the end-user of the route planner and highly depends on the properties of the route travel time distribution. Central in the development of the route planner is the distinction between (cheap) off-line storage and computations and (expensive) on-line computations. For that it is important to minimize the on-line computational effort of calculating a route travel time histogram. The method described in this study can be deployed beyond route planning applications and the field of traffic management. Other applications can be for instance the location optimization of logistic hubs (i.e. airports, packaging services, ...) or public services (i.e. hospitals, fire stations, ...) and the evaluation of network performance.
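    As background, the simplest way to turn link travel time histograms into a route histogram is discrete convolution under an independence assumption; the sketch below (Python with NumPy) shows this naive baseline, whose shortcomings on correlated links are precisely what a clustering approach must address (the function name and the independence assumption are ours):

        import numpy as np

        def route_histogram(link_histograms):
            # Each histogram: counts over travel time bins of a common width.
            # Independence across links is assumed, which is exactly what makes
            # this baseline inaccurate on correlated networks.
            route = np.array([1.0])                      # point mass at zero travel time
            for h in link_histograms:
                h = np.asarray(h, dtype=float)
                route = np.convolve(route, h / h.sum())  # normalise, then convolve
            return route                                 # entry i: P(route time = i bins)

    For example, route_histogram([[0, 3, 1], [1, 1]]) returns the distribution of the sum of the two link travel times.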

Friday 11:00:00 Constraint programming and Supply Chain Management
Room 126 - Chair: Y. Deville
  • A CLP engine for product configuration
    Dario Campagna (University of Perugia)
    Abstract:
    Product configuration systems are an emerging software technology that supports companies in deploying mass customization strategies. We describe an original reasoning engine, written in SWI-Prolog, that we developed for a commercial configuration system. We devote special attention to the relevant process of product configuration and assignment revision, and we point out the research directions we are following to extend our work.
  • Just-In-Time Scheduling with Constraint Programming
    Jean-noël Monette (Université catholique de Louvain)
    Co-authors: Yves Deville, Pascal Van Hentenryck
    Abstract:
    This paper considers Just-In-Time Job-Shop Scheduling, in which each activity has an earliness and a tardiness cost with respect to a due date. It proposes a constraint programming approach, which includes a novel filtering algorithm and dedicated heuristics. The filtering algorithm uses a machine relaxation to produce a lower bound that can be obtained by solving a Just-In-Time PERT problem. It also includes pruning rules which update the variable bounds and detect precedence constraints. The paper presents experimental results which demonstrate the effectiveness of the approach over a wide range of benchmarks.
  • Tank allocation for liquid bulk vessels using a hybrid constraint programming approach
    Rowan Van Schaeren (Antwerp Maritime Academy)
    Abstract:
    The chemical industry is characterized by a very strong competitive environment. This leads to increased pressure to provide consistent quality, fast delivery and cost cutting. Chemicals are transported all over the world in special, dedicated vessels. These chemical tankers form an important aspect of the liquid bulk chemicals trade, and the number of chemical tankers available on the market increases steadily. Chemical tankers distinguish themselves from other tankers by the large number of separate cargo tanks available to load cargo; some chemical tankers have over 30 individual tanks. This allows many different cargoes to be transported simultaneously, but requires that each cargo tank has its own pump and piping system to connect with the shore in order to prevent mixing or contaminating individual cargoes. This also has an important impact on the planning of cargoes on board these chemical tankers, as cargo interactions can result in dangerous situations. Almost all chemical products can be considered dangerous in one way or another (being labeled as e.g. corrosive, marine pollutant or toxic). These products must therefore be stored in accordance with stringent regulations. Concerning stowage, the most important criterion is segregation. Segregation is not only important between the different products themselves (certain products, e.g. caustic soda and sulfuric acid, cannot be stowed in adjacent tanks) but also with respect to the tank coatings that protect the tanks from the products stored in them. In addition, the vessel's stability constraints complicate the capacity planning even further. Because of the computational complexity of mathematically optimizing the problem, loading plans are generally generated manually by the vessel planners and checked by a stability program afterward. Because of the multitude of constraints, regulations and "good practices", it is very difficult to generate high quality loading plans manually. Optimization methods capable of handling these side constraints and generating high quality solutions can therefore greatly support vessel planners and free up time for handling non-standard scheduling issues. Academic literature on the tank allocation problem (TAP) or operational planning is limited. Most of the conducted research considers both the load planning and vessel routing of chemical tankers. However, only a few studies deal with segregation and stability constraints simultaneously in their load planning, even though these are essential in real-life applications. The literature review illustrates the difficulties of simultaneously addressing both planning and routing aspects for loading chemical vessels, even if no or only simplified ship stability constraints are taken into account. As this research aims at modeling the stability of chemical tankers in full detail, we start by focusing on the load planning part of the problem. Although the loading aspect of chemical vessels can be addressed successfully by mixed integer programming, constraint programming (CP) looks more promising for developing an integrated model in the future, in which both the port scheduling aspect and the cargo load planning aspect are combined. In the proposed model, CP is used for making the allocations of cargo to the tanks and LP is dynamically used as a final constraint checker for the ship stability at the potential solution nodes of the search tree.
More precisely, at a potential solution node of the CP search tree, every tank is either empty or allocated to a cargo, ensuring that there is enough volume for each cargo while satisfying segregation constraints. The LP is then called as a subroutine to check and optimize the stability constraints, deciding how many tons of the cargo will be allocated to each tank while considering the cargo allocation as given. If the stability requirements cannot be satisfied by the LP, the node is simply discarded. This simple hybridization between CP (master) and LP (slave) is used in a branch-and-bound scheme to maximize the unused free space of the tanks. Computational results with Comet show that this hybrid CP-LP approach is an interesting path for solving the operational planning problem of chemical tankers (load planning and scheduling), and computation times prove to be operationally acceptable with the proposed model. Our current research focuses on finding a good CP search algorithm, as this is critical for computation times. Integrating the ship routing and scheduling into the operational planning will be the topic of future research.
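    The master/slave decomposition above can be made concrete in a few lines; the sketch below (Python, using scipy.optimize.linprog) is a toy reconstruction under strong assumptions: hypothetical tank and cargo data, brute-force enumeration in place of CP propagation, no segregation constraints, and a single moment-balance row standing in for full ship stability.

        from itertools import product
        from scipy.optimize import linprog

        # Hypothetical toy data: tank capacities (tons), crude longitudinal lever
        # arms, and cargo quantities (tons). Segregation is omitted for brevity.
        TANK_CAP = [500, 500, 300, 300]
        TANK_POS = [-2.0, -1.0, 1.0, 2.0]
        CARGOES  = [600, 400]

        def stability_lp(assign):
            # Slave LP: given a tank -> cargo assignment (None = empty tank),
            # choose tonnage per tank so that every cargo is fully loaded,
            # capacities hold and the longitudinal moment balances.
            n = len(TANK_CAP)
            A_eq = [[1.0 if assign[i] == c else 0.0 for i in range(n)]
                    for c in range(len(CARGOES))]
            A_eq.append(list(TANK_POS))                 # sum(pos_i * x_i) == 0
            b_eq = list(CARGOES) + [0.0]
            bounds = [(0.0, TANK_CAP[i] if assign[i] is not None else 0.0)
                      for i in range(n)]
            res = linprog(c=[0.0] * n, A_eq=A_eq, b_eq=b_eq, bounds=bounds,
                          method="highs")
            return res.x if res.success else None       # None: discard this node

        def solve():
            # "Master" search: enumerate assignments; a real model would let CP
            # propagate volume and segregation constraints instead.
            for assign in product([None] + list(range(len(CARGOES))),
                                  repeat=len(TANK_CAP)):
                tons = stability_lp(assign)
                if tons is not None:
                    return assign, tons
            return None, None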
  • Effective Estimation-based Stochastic Local Search Algorithms for Stochastic Routing Problems
    Thomas Stützle (Université Libre de Bruxelles (ULB))
    Co-authors: Prasanna Balaprakash, Mauro Birattari, Marco Dorigo
    Abstract:
    The probabilistic traveling salesman problem (PTSP) is a paradigmatic NP-hard routing problem under uncertainty. In this talk we give an overview of the steps taken to engineer effective stochastic local search algorithms for the PTSP and how the algorithms were extended to another stochastic routing problem with stochastic demands. Our final algorithms outperform the previous state-of-the-art algorithms by quite a large margin with respect to computation time and solution quality.

Friday 11:00:00 OR in health management
Room 130 - Chair: P. De Causmaecker
  • Modelling questions in nurse rostering
    Burak Bilgin (Kaho Sint-Lieven)
    Co-authors: Patrick De Causmaecker, Greet Vanden Berghe
    Abstract:
    On an abstract level, the nurse rostering problem is a straightforward mathematical problem: the number of assignments must fulfill the coverage constraints, and the assignments of a nurse need to satisfy the employment constraints. In practice, however, neither the problem nor the constraints are defined with mathematical models in mind. The employment constraints are the result of collective bargaining on conditions of employment; the coverage constraints represent the staff requirements as they are perceived by the hospital management. Consequently, the resulting problem description and constraints are expressed in a verbal language, involving numerous exceptions to each rule. Most of the time, academics simplify the problem description and constraints in order to translate them into mathematical models. Although the resulting models are straightforward to analyse and solve, the solutions obtained with simplified models do not satisfy the requirements of real-world user groups. As a result, a gap between theory and practice arises, which prevents the utilisation of academic results in real-world settings. The nurse rostering literature mainly focuses on solution methods; the models used in nurse rostering studies are often not examined in great detail. The first step towards bridging the gap between theory and practice is to spot the points where academic models fail to represent the real-world setting properly. The second step is to propose modelling practices that represent real-world problems with a higher accuracy.
  • Simulation study of outpatient scheduling with unpunctual patients
    Thomas Demoor (Ghent University)
    Co-authors: Dieter Fiems and Herwig Bruneel
    Abstract:
    We focus on patient unpunctuality and handle patients in order of arrival. This situation arises when patients register at a reception desk and the doctor attends to patients in the registered order. Monte Carlo simulation is used to evaluate different appointment rules proposed in the literature and to compare them to a schedule with a fixed target mean patient waiting time. Our simulation results indicate that considerable patient waiting time savings can be attained at minimal doctor idle time and overtime.
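    A bare-bones Monte Carlo model of such a session can be written in a few lines; in the sketch below (Python), Gaussian unpunctuality, exponential consultation times and the performance definitions are our own illustrative assumptions, not the distributions used in the study.

        import random

        def simulate_session(appointments, session_end, consult_mean, sd, runs=10000):
            # appointments: scheduled times; a patient arrives at appointment
            # + N(0, sd) and patients are served in order of arrival.
            wait = idle = over = 0.0
            for _ in range(runs):
                arrivals = sorted(t + random.gauss(0.0, sd) for t in appointments)
                clock = 0.0
                for a in arrivals:
                    start = max(clock, a)
                    idle += start - clock      # doctor idles until the patient shows
                    wait += start - a          # patient waits while doctor is busy
                    clock = start + random.expovariate(1.0 / consult_mean)
                over += max(0.0, clock - session_end)
            n = runs * len(appointments)
            return wait / n, idle / runs, over / runs   # mean wait, idle, overtime

    For instance, simulate_session([0, 15, 30, 45], session_end=60, consult_mean=12, sd=5) compares one appointment rule under moderate unpunctuality.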
  • Binary matrix decompositions without tongue-and-groove underdosage for radiation therapy planning
    Céline Engelbeen (Université Libre de Bruxelles)
    Co-authors: Antje Kiesel
    Abstract:
    In the present paper we consider a particular case of the segmentation problem arising in the elaboration of radiation therapy plans. This problem consists in decomposing an integer matrix $A$ into a nonnegative integer linear combination of particular binary matrices, called segments, which represent fields that are deliverable with a multileaf collimator. In the radiation therapy context, it is desirable to find a decomposition that minimizes the beam-on time, that is, the sum of the coefficients of the decomposition (the beam-on time problem). This minimization problem, under the condition that the segments used have to respect the tongue-and-groove constraint, has already been studied: there exist exact algorithms for particular cases as well as heuristics for the general one, but the complexity of the problem is still unknown. We prove that the problem is polynomially solvable for binary input matrices and provide a polynomial procedure that finds such a decomposition with minimal beam-on time.
  • Evaluating the impact of case mix decisions on capacity utilizations through discrete-event simulation
    Guoxuan Ma (K.U. Leuven)
    Co-authors: Erik Demeulemeester
    Abstract:
    The performance of the health care sector heavily depends on the capacity structure, the case mix decision and the variability in the system, both on the supply side and on the demand side. In this study, a simulation analysis is conducted to investigate how optimal case mix decisions affect capacity utilizations and patient flows under certain variability conditions; in particular, the variation of the length of stay is considered and the bed occupancy rate is explored.

Friday 11:00:00 Rankings and importance indices
Room 138 - Chair: JL. Marichal
  • Score-based bibliometric rankings of authors
    Thierry Marchant (Ghent University)
    Abstract:
    Scoring rules (or score-based rankings or summation-based rankings) form a family of bibliometric rankings of authors in which authors are ranked according to the sum, over all their publications, of some partial scores. Many of these rankings are widely used (e.g., the number of publications, weighted or not by the impact factor, by the number of authors or by the number of citations). We present an axiomatic analysis of the family of all scoring rules and of some particular cases within this family.
  • Assessing the evolution of a situation based on GIS
    Stephane Aimé Metchebon Takougang (UMONS, Faculté Polytechnique de Mons)
    Co-authors: M. Pirlot
    Abstract:
    In land management and other applications, it often occurs that a map representing the region under study is partitioned into zones that are homogeneous with respect to some criteria. The state of each zone of the map may evolve with time; in particular, its evolution may result from the application of policies that promote changes in the behavior, traditions or practices of the population. In order to assess the efficiency of a policy, it is thus useful to develop methods that allow different multicriteria assessments of the zones of the same map to be compared.
  • Weighted Banzhaf interaction indexes and weighted least squares
    Pierre Mathonet (University of Luxembourg)
    Co-authors: JL. Marichal
    Abstract:
    In cooperative game theory, there exist several notions of power indexes, for instance the Banzhaf and the Shapley power indexes. Banzhaf and Shapley interaction indexes were introduced to measure the interaction of coalitions of players in a given game. We generalize an alternative definition of the Banzhaf index as the solution of a least squares approximation problem. Considering weighted least squares problems, we obtain generalizations of this interaction index and present their properties.
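    For reference, the unweighted Banzhaf interaction index of a coalition S in a game v on a player set N with |N| = n is usually written as below (standard definition, recapped in our notation; for |S| = 1 it reduces to the Banzhaf power index):

        I_B(v, S) = \frac{1}{2^{\,n-|S|}} \sum_{T \subseteq N \setminus S} \Delta_S v(T),
        \qquad \Delta_S v(T) = \sum_{L \subseteq S} (-1)^{|S| - |L|}\, v(T \cup L).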
  • Measuring the interactions among variables of functions over the unit hypercube
    Jean-luc Marichal (University of Luxembourg)
    Co-authors: P. Mathonet
    Abstract:
    By considering a least squares approximation of a given square integrable function f on the unit n-dimensional hypercube by a multilinear polynomial of a specified degree, we define an index which measures the overall interaction among variables of f. This definition extends the concept of Banzhaf interaction index introduced in cooperative game theory. We discuss the main properties and a few applications of this new interaction index.

Friday 11:00:00 Queueing
Room 035 - Chair: S. Wittevrongel
  • Modelling data traffic performance in file servers: session-based arrivals
    Bart Feyaerts (Ghent University)
    Co-authors: Stijn De Vuyst, Sabine Wittevrongel, Herwig Bruneel
    Abstract:
    Contemporary communication networks are faced with increasingly heterogeneous traffic characteristics. In order to get reliable predictions about the network performance, adequate models for both data traffic and network components are indispensable. In this paper we focus on the session-based arrival process, a novel model for data traffic. This model considers users that can start and end sessions, during which data are transported over the network. We then use the model to obtain analytical and numerical results for the mean delay of a session in a network buffer.
    1 Introduction. Packet buffers are crucial components in many communication networks, where they provide temporary storage of data packets. A sound understanding of these buffers and how they behave is therefore crucial to study the network performance as a whole. An essential factor in the performance measures of a packet buffer is the nature of the arrival process that generates the data packets. Session-based arrival streams form a new approach to modelling data traffic in modern communication networks. Users from an infinite population can start and end sessions, during which data are sent over the network. In this paper, we focus on a packet buffer, modelled as a discrete-time, single-server queueing system with infinite buffer capacity, geometric service times and session-based arrivals. Per time slot, each of the sessions generates a random but strictly positive number of information packets. The sessions all last for a random, but again strictly positive, number of time slots. This session-based packet generation scheme is a generalization of the train arrival process, where sessions (in that context referred to as messages) have a fixed bandwidth of 1 packet per slot. All data traffic arriving at the buffer can be divided into an arbitrary number T of session types. Each session type is characterized by three stochastic components: the session generation process describes the number of new sessions of that type during a random slot; the session bandwidth denotes the number of packets generated by a session during a random slot; and the session length corresponds to the number of slots a session lasts.
    2 Description of the analysis. In previous work [1], a Markovian system state description with an infinite-length system state vector was constructed. This vector contains the buffer content at the end of a certain slot and, for each session type, the number of active sessions during that slot, grouped by the number of slots the sessions have already been active. The steady-state probability generating function of this system state vector has also been obtained, as well as analytical expressions for the mean buffer content and the mean packet delay. Based on these preliminary results, we now investigate the mean value of the session delay. The delay of a session is defined as the integer number of slots, starting at the end of the slot in which the session's first packet arrives at the buffer, until the end of the slot in which the session's final packet leaves the system. The first step is to obtain the mean session delay of sessions of a given type and length; this derivation is very different for single-slot and multiple-slot sessions. In the next step we obtain the mean session delay of sessions of a given type by taking the sum of the conditional means obtained in the former step, weighted by the corresponding session length probabilities. In the final step, an analogous weighted sum yields the overall mean session delay. Numerical examples show both some intuitive and some more intriguing results. As expected, an increasing system load leads to an increasing mean session delay; the same holds for an increasing variance in the arrival process. A more counterintuitive result is that the mean session delay can be smaller than the mean packet delay for some configurations. This can occur when there is unbalanced traffic: frequent few-packet sessions in combination with infrequent many-packet sessions, where the major part of the packets arrive during the infrequent sessions. Although these infrequent sessions have little effect on the mean session delay, they have a crucial effect on the mean packet delay.
    3 Application. A possible application of our model is to study the behaviour of the output buffer of a file server. Considering each file transfer as a single session, the traffic to such an output buffer can be well described by our session-based model. Since the model allows for general distributions of both the session lengths and the session bandwidths, it enables us to take into account actual traffic characteristics, as observed from a real traffic trace.
    References. [1] S. Wittevrongel, S. De Vuyst and H. Bruneel, Analysis of discrete-time buffers with general session-based arrivals, Proceedings of ASMTA 2009 (Madrid, June 2009), Lecture Notes in Computer Science, vol. 5513, pp. 189-203.
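    The model's ingredients (session starts, per-slot bandwidths, session lengths, geometric service) can be seen in a toy simulation; the sketch below (Python) uses Bernoulli session starts and geometric distributions throughout, which are special cases chosen for brevity rather than the general distributions and T session types the analysis supports.

        import random

        def simulate(slots, p_new, mean_len, mean_bw, service_p):
            buffer_content, active, acc = 0, 0, 0
            for _ in range(slots):
                if random.random() < p_new:              # a new session starts
                    active += 1
                for _ in range(active):                  # each session sends >= 1 packets
                    n = 1
                    while random.random() > 1.0 / mean_bw:
                        n += 1                           # geometric bandwidth, mean mean_bw
                    buffer_content += n
                # each session ends independently w.p. 1/mean_len (geometric length)
                active = sum(1 for _ in range(active)
                             if random.random() > 1.0 / mean_len)
                if buffer_content > 0 and random.random() < service_p:
                    buffer_content -= 1                  # geometric service completion
                acc += buffer_content
            return acc / slots                           # estimated mean buffer content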
  • Generalization of preemptive and non-preemptive priority queues
    Joris Walraevens (Ghent University - UGent)
    Co-authors: Tom Maertens, Herwig Bruneel
    Abstract:
    Priority scheduling is a hot topic in queueing theory. Priority queues are categorized into two types, preemptive and non-preemptive. In the first type, the service of a lower-priority customer is interrupted when a new higher-priority customer arrives, while this is not the case in the second. We propose a priority scheduling discipline that generalizes and lies in between these two extremes. Simulation studies show a performance cost gain of about 10% compared to the two extremes.
  • Queueing analysis of outpatient scheduling in health care
    Stijn De Vuyst (Ghent University)
    Co-authors: Dieter Fiems (first author), Stijn De Vuyst, Herwig Bruneel.
    Abstract:
    We present a queueing analytic approach to assess the performance of outpatient appointment scheduling in health care facilities. A session of K consecutive appointments, handled by a single physician, is considered. All patients are assumed to be punctual and their consultation times constitute a sequence of independent random variables. Our results include expressions for the moments of the waiting times of the patients as well as the moments of the idle times and overtime of the physician.
  • Stochastic Hybrid Simulation
    Ben Lauwens (Royal Military Academy)
    Abstract:
    This work deals with an extension to the hybrid simulation paradigm, i.e. the combination of event-driven simulation and analytical modelling, applied to packet telecommunication networks. In order to speed up the simulation only a small part of all packets, the foreground traffic, is processed in an event-driven way. On each arrival of a foreground packet, the waiting time of the packet is sampled from the virtual waiting time distribution function of the combined foreground and background traffic. This distribution function is stochastically modelled by the exact large deviations asymptotic of the virtual waiting time in a many sources regime. This novel methodology is not only valid for wired point-to-point queueing networks having a fixed transmission capacity, but it can also be applied to queueing networks for which the transmission capacity varies with the traffic load of all the elements in the network. The results obtained by the stochastic hybrid simulator are compared to full-blown event-driven simulations. An important reduction in simulation run-time is gained without sacrificing accuracy.

Friday 15:10:00 Optimization software
Room 126 - Chair: E. Loute
  • Fast optimization modeling with AIMMS
    Frans De Rooij (AIMMS)
    Abstract:
    AIMMS is a system for developing optimization models and complete decision support applications. Universities use AIMMS to teach OR without having to teach a programming language. Researchers benefit from AIMMS’ algorithmic capabilities, such as column generation, outer approximation for MINLP, multi-start NLP, stochastic programming, Benders decomposition and, soon, robust optimization. Industrial companies use AIMMS for models integrated with their existing IT and deployed to many end-users.

Friday 15:10:00 Integrated operations planning
Room 130 - Chair: B. Raa
  • The mobile repairman problem: classification of existing models
    Jasmine Buré (KULeuven)
    Co-authors: Pieter Vansteenwegen, Dirk Cattrysse
    Abstract:
    The research areas of maintenance, inventory and routing have already been extensively investigated, and thousands of models can be found in the literature. Nevertheless, the existing papers and models generally do not take into account the strong interrelationship between these research areas. Papers including all three factors are, to the best of our knowledge, non-existent, and only a few papers on the intersection of two of the three domains have been published. As a starting point for dealing with this problem, a classification of models was made that takes into account the interaction of these research domains. The goal of this classification is to provide insight into all known relations between maintenance, inventory and routing. Furthermore, a division into decision-making levels (strategic, tactical and operational) is made.
  • An integrated solution method for order batching and picking
    Birger Raa (Ghent University)
    Co-authors: W. Dullaert
    Abstract:
    This paper integrates (i) batching, (ii) sequencing, (iii) route construction and (iv) improvement methods to provide a comprehensive approach to operational decision making in multiple-block warehouses. Moreover, it makes a critical assessment of the impact of existing approaches on the overall cost of operating a warehouse.

Friday 15:10:00 Cycles in graphs
Room 138 - Chair: F. Spieksma
  • On detecting and enumerating chordless circuits in a digraph
    Raymond Bisdorff (University of Luxembourg)
    Abstract:
    We introduce and discuss an O(n^5)-time algorithm for detecting and enumerating all the chordless circuits of a digraph of order n. Running it on random digraph instances gives some insight into its practical performance.
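    As a concrete reading of the definition, the sketch below (Python) enumerates chordless circuits by naive path extension; it is exponential in the worst case and unrelated to the O(n^5) procedure of the talk, but it makes precise what counts as a chord in the directed setting (integer vertex labels are assumed).

        def chordless_circuits(adj):
            # adj: dict vertex -> set of successors. A circuit is chordless if the
            # only arcs among its vertices are the consecutive ones plus the
            # closing arc. Each circuit is reported once, from its smallest vertex.
            found = []

            def chord_free(path, nxt):
                last = len(path) - 1
                for i, v in enumerate(path):
                    if i < last and nxt in adj.get(v, ()):  # forward chord v -> nxt
                        return False
                    if i > 0 and v in adj.get(nxt, ()):     # backward chord nxt -> v
                        return False
                return True

            def extend(path):
                for nxt in sorted(adj.get(path[-1], ())):
                    if nxt == path[0]:
                        # closing arc; reject if an inner vertex also reaches the start
                        if len(path) >= 2 and not any(
                                path[0] in adj.get(path[i], ())
                                for i in range(1, len(path) - 1)):
                            found.append(list(path))
                    elif nxt > path[0] and nxt not in path and chord_free(path, nxt):
                        extend(path + [nxt])

            for v in sorted(adj):
                extend([v])
            return found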
  • Coloring Graphs to Avoid Monochromatic Cycles
    Fabrice Talla Nobibon (Katholieke Universiteit Leuven)
    Co-authors: C. Hurkens, R. Leus, FCR. Spieksma
    Abstract:
    We consider the problem of deciding whether a given directed graph can be vertex-partitioned into two acyclic subgraphs. Applications of this problem include testing the rationality of collective consumption behavior, a subject in micro-economics. We present an exact algorithm, called a branch-and-check algorithm, which is able to solve instances of considerable size within a few seconds. We study empirically the transition from a high to a low probability of a YES answer as a function of some problem parameters. We identify classes of directed graphs for which the problem is easy, and we prove that the existence of a constant-factor approximation algorithm is unlikely for an optimization version which maximizes the number of vertices that can be colored using two colors while avoiding monochromatic cycles.
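    Stated as code, the decision problem reads as follows; this brute-force sketch (Python) tries all 2^n vertex bipartitions and checks acyclicity by depth-first search, so it is only usable on tiny instances and shares nothing with the branch-and-check pruning of the talk.

        from itertools import product

        def is_acyclic(vertices, adj):
            # DFS cycle detection restricted to the subgraph induced by `vertices`.
            color = {v: 0 for v in vertices}          # 0 white, 1 grey, 2 black
            def dfs(v):
                color[v] = 1
                for w in adj.get(v, ()):
                    if w in color and (color[w] == 1 or
                                       (color[w] == 0 and not dfs(w))):
                        return False                  # grey successor: cycle found
                color[v] = 2
                return True
            return all(dfs(v) for v in vertices if color[v] == 0)

        def two_acyclic_partition(adj):
            # adj: dict vertex -> iterable of successors. Tries all vertex splits.
            vertices = sorted(adj)
            for mask in product((0, 1), repeat=len(vertices)):
                parts = [{v for v, m in zip(vertices, mask) if m == b}
                         for b in (0, 1)]
                if all(is_acyclic(p, adj) for p in parts):
                    return parts                      # a feasible bipartition (YES)
            return None                               # NO such partition exists

    For example, two_acyclic_partition({0: [1], 1: [2], 2: [0]}) splits the directed triangle, while a graph whose every bipartition leaves a monochromatic cycle yields None.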

 
 
  SOGESCI/ORBEL
QuantOM - HEC-Management School - University of Liège