
J Math Biol. Author manuscript; available in PMC 2010 September 1.

Published in final edited form as:

Published online 2009 November 10. doi: 10.1007/s00285-009-0306-3

PMCID: PMC2888771

NIHMSID: NIHMS175089

Alexander V. Terekhov, Department of Kinesiology, The Pennsylvania State University, 039 Recreation Building, University Park, PA 16802, USA; Institut des Systèmes Intelligents et de Robotique, CNRS-UPMC, Pyramide ISIR, 4 Place Jussieu, 75005 Paris, France, Email: avterekhov@gmail.com


We consider the problem of what is being optimized in human actions with respect to various aspects of human movements and different motor tasks. From the mathematical point of view this problem consists of finding an unknown objective function given the values at which it reaches its minimum. This problem is called the inverse optimization problem. Until now the main approach to this problem has been the cut-and-try method, which consists of introducing an objective function and checking how well it reflects the experimental data. Using this approach, different objective functions have been proposed for the same motor action. In the current paper we focus on inverse optimization problems with additive objective functions and linear constraints. Such problems are typical in human movement science; the problem of muscle (or finger) force sharing is an example. For such problems we obtain sufficient conditions for uniqueness and propose a method for determining the objective functions. To illustrate our method we analyze the problem of force sharing among the fingers in a grasping task. We estimate the objective function from the experimental data and show that it can predict the force-sharing pattern for a vast range of external forces and torques applied to the grasped object. The resulting objective function is quadratic with essentially non-zero linear terms.

The human motor system is redundant with respect to the common actions it performs: there usually are numerous ways to achieve a particular motor goal (Bernstein 1967). At the same time, human motor actions are usually well reproducible; they vary only slightly from trial to trial and from subject to subject. Such consistency may reflect the fact that humans try to perform their actions in the most “comfortable” ways, optimizing performance in some sense.

The problem of what is being optimized in human actions has been studied for trajectory formation in reaching movements (Biess et al. 2007; Cruse et al. 1990; Engelbrecht 2001; Flash and Hogan 1985; Plamondon et al. 1993; Tsirakos et al. 1997), writing (Edelman and Flash 1987), and walking (Pham et al. 2007); for force sharing among the muscles (reviewed in Prilutsky 2000; Prilutsky and Zatsiorsky 2002) in gait (reviewed in Collins 1995), cycling (Prilutsky and Gregory 2000), jumping (Anderson and Pandy 1999), sit-to-stand movements (Kuzelicki et al. 2005), and postural control (Kuo and Zajac 1993); and for force sharing among the fingers in grasping (Pataky et al. 2004).

This problem is usually called *the problem of inverse optimization* (Ahuja and Orlin 2001), in contrast to the more common *direct* optimization problem. The latter consists of finding the values minimizing a given objective function. In inverse optimization, by contrast, the objective function is unknown, while the values at which it reaches its minimum are given. The inverse optimization problem is usually considered for a set of different constraints.

Though the inverse optimization problem has been addressed for a vast range of tasks, the approach rarely goes beyond the cut-and-try method. The common analysis starts from an a priori chosen objective function, for which predictions are obtained and compared with experimental data. The choice of the objective function depends on physiological or psychological considerations and often on mathematical elegance. Reasonably good agreement of the predictions with experimental data is usually interpreted as proof that the chosen objective function is the one optimized by the motor system.

The most extensive analysis of the problem has been done for force sharing among the muscles serving one or several joints (reviewed in Erdemir et al. 2007; Prilutsky and Zatsiorsky 2002). In this case the optimization problem is to find muscle forces *F _{i}* such that, taken together, they exert the desired moments of force:

$$\begin{array}{c}J({F}_{1},\dots,{F}_{n})\to \text{min},\quad J:{\mathbb{R}}^{n}\to \mathbb{R},\\ \sum _{i=1}^{n}{r}_{ij}{F}_{i}={M}_{j},\quad j=1,\dots,k,\\ 0\le {F}_{i}\le {F}_{i}^{\text{max}},\quad i=1,\dots,n,\end{array}$$

where *k* is the number of joints, *n* is the number of muscles acting on these joints (*n* > *k*), *r _{ij}* is the moment arm of the *i*th muscle force with respect to the *j*th joint, *M _{j}* is the required moment of force at the *j*th joint, and ${F}_{i}^{\text{max}}$ is the maximal force of the *i*th muscle.
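The direct problem above can be solved numerically with off-the-shelf tools. The following sketch uses `scipy`; all numbers (moment arms, desired moments, maximal forces) are invented for illustration, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative direct problem: n = 4 muscles acting on k = 2 joints.
# The moment arms r[i, j], moments M, and maximal forces are made up.
r = np.array([[1.0, 0.5],
              [0.8, 0.0],
              [0.0, 1.2],
              [0.6, 0.9]])
M = np.array([10.0, 8.0])
F_max = np.array([30.0, 25.0, 40.0, 35.0])
F_star = F_max          # normalize forces by the maximal forces
p = 3                   # power in the objective function (1)

def J(F):
    # objective (1): p-norm of the normalized muscle forces
    return np.sum((F / F_star) ** p) ** (1.0 / p)

res = minimize(J, x0=np.full(4, 5.0), method='SLSQP',
               constraints={'type': 'eq', 'fun': lambda F: r.T @ F - M},
               bounds=[(0.0, fm) for fm in F_max])
F_opt = res.x           # predicted force-sharing pattern
```

The equality constraint encodes $\sum_i r_{ij} F_i = M_j$ and the bounds encode $0 \le F_i \le F_i^{\max}$; with *n* > *k* the feasible set is non-trivial and the minimizer selects one sharing pattern among infinitely many feasible ones.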

The objective function *J* is usually of the form:

$$J({F}_{1},\dots,{F}_{n})={\left(\sum _{i=1}^{n}{\left(\frac{{F}_{i}}{{F}_{i}^{*}}\right)}^{p}\right)}^{1/p},$$

(1)

where ${F}_{i}^{*}$ are positive weights, which are usually either taken equal to each other or normalized to some quantity (for example, the physiological cross-sectional area or the maximal force). The power *p* can be any positive number up to plus infinity, in which case the objective function transforms into:

$$J({F}_{1},\dots,{F}_{n})=\underset{i=1,\dots,n}{\text{max}}\left(\frac{{F}_{i}}{{F}_{i}^{*}}\right).$$

Crowninshield and Brand (1981) suggested an approach for defining the parameters of the objective function (1), which they claimed to be physiologically relevant. In their model the value being minimized is the inverse of the fatigue time averaged across muscles. They showed that this value is proportional to the normalized muscle force raised to a power *p* between 2.54 and 3.14. The normalization parameters ${F}_{i}^{*}$ are taken proportional to the physiological cross-sectional area (PCSA).

The above function captures general features of muscle activation patterns in gait (Crowninshield and Brand 1981). It has even been suggested that in skilled movements the muscle forces are shared in a way that minimizes this objective function (Prilutsky 2000). However, this hypothesis cannot explain the experimentally observed co-activation of antagonist muscles (i.e. muscles making opposite contributions to the moment of force at the same joint), except for biarticular antagonists (Ait-Haddou et al. 2000; Herzog and Binding 1992).

It is unclear whether an objective function can be unambiguously identified from experimental data. Collins (1995) has shown that the same experimental data can be explained reasonably well with significantly different objective functions. To some extent, this fact may be due to the imprecision of muscle force estimation from the electromyographic signal and uncertainties in the moment arms, which have been shown to influence the result of optimization significantly (Herzog 1992; Raikova and Prilutsky 2001; Redl et al. 2007).

These uncertainties are less dramatic in grasping tasks, where the forces and the points of their application can be measured directly (Zatsiorsky et al. 2002; Zatsiorsky and Latash 2008). When the grasp orientation was vertical and the subjects had to maintain hand-held objects in the air at equilibrium, resisting the object weight and external torque, optimization was performed using as criteria the cubic norms of (a) finger forces, (b) finger forces normalized with respect to the maximal forces measured in single-finger tasks, (c) finger forces normalized with respect to the maximal forces measured in a four-finger task, and (d) finger forces normalized with respect to the maximal moments that can be generated by the fingers. All four criteria failed to predict antagonist finger moments (moments exerted by individual fingers that assisted rather than resisted external torques) at large external torques. Note that the above criteria did not take into consideration finger interdependence (‘enslaving’). To account for finger enslaving, the vectors of “neural commands” were reconstructed from the finger forces using the enslaving matrix (Zatsiorsky et al. 2002). Optimization of the neural commands resulted in the best correspondence between actual and predicted finger forces; in particular, the antagonist moments were predicted. However, when the grasp orientation was not vertical, all the above-mentioned objective functions explained the experimental data with approximately similar accuracy (Pataky et al. 2004).

In this paper we show that, even if ideally precise experimental data are given, the inverse optimization problem may have infinitely many solutions. Consider a simple mental experiment. Assume that the subject is instructed to exert a given total force with the four digits:

$$\begin{array}{c}{F}_{1}+{F}_{2}+{F}_{3}+{F}_{4}={F}_{\text{total}},\\ 0<{F}_{i}<{F}_{i}^{\text{max}}.\end{array}$$

(2)

In the experiment the total force is varied within some range. Assume that the subject performs the task perfectly and there are no errors in data recording. For each value of the total force the subject chooses a pattern of sharing the total force among the digits. Let us assume that the total force is shared equally among the fingers:

$${F}_{1}={F}_{\text{total}}/4,\quad {F}_{2}={F}_{\text{total}}/4,\quad {F}_{3}={F}_{\text{total}}/4,\quad {F}_{4}={F}_{\text{total}}/4.$$

(3)

Now, assume that a researcher guesses an objective function whose optimization should lead to the observed experimental results:

$$J({F}_{1},{F}_{2},{F}_{3},{F}_{4})=\frac{1}{2}{\left({F}_{1}^{2}+{F}_{2}^{2}+{F}_{3}^{2}+{F}_{4}^{2}\right)}^{1/2}.$$

(4)

Indeed, one can verify that minimization of (4) subject to the constraints (2) has the unique solution (3). However, one can also notice that (4) is far from the only function with this property. For example, the objective function

$$J({F}_{1},{F}_{2},{F}_{3},{F}_{4})={F}_{1}\cdot {F}_{2}\cdot {F}_{3}\cdot {F}_{4}$$

(5)

is as good at predicting the experimental results (3) as the objective function (4). In fact, for every differentiable function *g* with strictly increasing derivative *g′*, minimizing

$$J({F}_{1},{F}_{2},{F}_{3},{F}_{4})=g({F}_{1})+g({F}_{2})+g({F}_{3})+g({F}_{4})$$

(6)

subject to (2) yields (3). Hence, for this particular example, there exist infinitely many different objective functions whose optimization, subject to (2), results in (3). At the same time, having more observations under other experimental conditions might narrow the range of possible objective functions.
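This non-uniqueness is easy to reproduce numerically. The sketch below minimizes the objective (4) and one instance of (6) (with the assumed choice *g*(*F*) = exp *F*, whose derivative is strictly increasing) under the constraint (2); both recover the equal-sharing solution (3).

```python
import numpy as np
from scipy.optimize import minimize

F_total = 20.0
cons = {'type': 'eq', 'fun': lambda F: F.sum() - F_total}  # constraint (2)
bounds = [(1e-6, 50.0)] * 4
x0 = np.array([2.0, 4.0, 6.0, 8.0])        # deliberately unequal start

J4 = lambda F: 0.5 * np.sqrt(np.sum(F ** 2))   # objective (4)
J6 = lambda F: np.sum(np.exp(F))               # objective (6) with g = exp

s4 = minimize(J4, x0, constraints=cons, bounds=bounds, method='SLSQP').x
s6 = minimize(J6, x0, constraints=cons, bounds=bounds, method='SLSQP').x
# both minimizers approximate the observed pattern (3): F_i = F_total / 4
```

Two essentially different objective functions, one set of observations: without varying the constraints further, the data cannot distinguish between them.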

The main goal of this paper is to develop a method for determining an unknown objective function from a set of observations. To do that, we obtain sufficient conditions that guarantee a unique solution of the inverse optimization problem. We focus our analysis on the inverse optimization problem with an additive objective function and linear constraints:

$$\begin{array}{c}J(x)={g}_{1}({x}_{1})+\cdots +{g}_{n}({x}_{n})\to \text{min},\\ \text{such that}\quad Cx=b,\end{array}$$

(7)

where *x* is an *n*-dimensional vector, *g _{i}* are unknown scalar functions, *C* is a *k* × *n* matrix (*k* < *n*, rank *C* = *k*), and *b* is a *k*-dimensional vector.

This formalization is typical of a vast range of problems in human movement science, in particular of various forms of the force-sharing problem. A simpler version of this problem was analysed by Siemienski (2006) for the case when the functions *g _{i}* are proportional to each other and only one linear constraint is present.

The paper has the following structure. In the Preliminaries, we provide general definitions and statements needed for the analysis of inverse optimization problems. In the Main Results, we focus on the problem (7), for which we prove the Uniqueness theorem. Then we consider two simple examples of the inverse optimization problem and illustrate how the Uniqueness theorem can be used to solve them. In the Applications, we analyse a “real-life” example of force sharing in grasping: we use our theoretical results to plan an experiment and to determine the objective function from the experimental data. The results are discussed in the Discussion section, followed by the Appendix, which contains the proofs of the statements given in the paper.

Estimating the objective function from observations does not necessarily lead to a unique solution. Indeed, some transformations of the objective function do not influence the solution of the optimization problem; among them are multiplication of the objective function by a positive number and addition of an arbitrary constant. As a consequence, the inverse optimization problem can never be solved uniquely unless some additional information on the objective function is given. We call two objective functions *J*_{1} : *X* → ℝ and *J*_{2} : *X* → ℝ *essentially different on a subset X* ⊂ ℝ^{n} if there exist constraints 𝒞 such that the problems ⟨*J*_{1}, 𝒞⟩ and ⟨*J*_{2}, 𝒞⟩ have different solutions. Otherwise we call the objective functions *essentially similar on X*.

Optimization of essentially similar objective functions under any constraints leads to the same result. Therefore the inverse optimization problem can be solved up to the class of essentially similar functions only. It must be noted that the class of essentially similar objective functions is rather vast. For example, the objective functions *J* (*x*) and *f* (*J* (*x*)) are essentially similar for any strictly increasing function *f*.

In some cases, minimization of the objective function can be performed independently for some subsets of variables. This fact may limit the possibility of inverse optimization. Consider a simple example:

$$\begin{array}{rl}J({x}_{1},{x}_{2},{x}_{3},{x}_{4})&={x}_{1}^{2}+{x}_{2}^{2}+{x}_{3}^{2}+{x}_{4}^{2}\to \text{min},\\ {x}_{1}+{x}_{2}&=a,\\ {x}_{3}+{x}_{4}&=b.\end{array}$$

It is evident that *x*_{1}, *x*_{2}, *x*_{3}, *x*_{4} minimize *J* subject to the constraints if and only if *x*_{1}, *x*_{2} minimize ${J}_{1}={x}_{1}^{2}+{x}_{2}^{2}$ subject to *x*_{1} + *x*_{2} = *a* and *x*_{3}, *x*_{4} minimize ${J}_{2}={x}_{3}^{2}+{x}_{4}^{2}$ subject to *x*_{3} + *x*_{4} = *b*. In these two problems the functions *J*_{1} and *J*_{2} can be replaced with any essentially similar objective functions *J̃*_{1} = *J̃*_{1}(*x*_{1}, *x*_{2}) and *J̃*_{2} = *J̃*_{2}(*x*_{3}, *x*_{4}). Then all *x*_{1}, …, *x*_{4} that minimize $J={x}_{1}^{2}+{x}_{2}^{2}+{x}_{3}^{2}+{x}_{4}^{2}$ subject to the constraints *x*_{1} + *x*_{2} = *a* and *x*_{3} + *x*_{4} = *b* will also minimize the objective function *J̃* = *J̃*_{1} + *J̃*_{2} subject to the same constraints. However, the objective functions *J* and *J̃* are essentially different. Thus, there are infinitely many essentially different objective functions minimized by the same values *x*_{1}, …, *x*_{4} under the constraints *x*_{1} + *x*_{2} = *a*, *x*_{3} + *x*_{4} = *b*. This fact may lead to non-uniqueness in solving the inverse optimization problem.
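The separation described above can be checked numerically: solving the full problem and the two subproblems independently gives the same minimizer (a sketch with arbitrary values of *a* and *b*).

```python
import numpy as np
from scipy.optimize import minimize

a, b = 4.0, 10.0
J = lambda x: np.sum(x ** 2)
cons = [{'type': 'eq', 'fun': lambda x: x[0] + x[1] - a},
        {'type': 'eq', 'fun': lambda x: x[2] + x[3] - b}]
x_full = minimize(J, np.zeros(4), constraints=cons, method='SLSQP').x

# solving the two subproblems independently gives the same answer
x12 = minimize(lambda y: np.sum(y ** 2), np.zeros(2), method='SLSQP',
               constraints={'type': 'eq', 'fun': lambda y: y.sum() - a}).x
x34 = minimize(lambda y: np.sum(y ** 2), np.zeros(2), method='SLSQP',
               constraints={'type': 'eq', 'fun': lambda y: y.sum() - b}).x
```

Here the minimum is reached at *x*_{1} = *x*_{2} = *a*/2 and *x*_{3} = *x*_{4} = *b*/2 in both formulations.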

We now define a *splittable* optimization problem. To do that we first introduce the notion of groups of variables *independent* with respect to the optimization problem. Let the objective function *J* (*x*) be minimized subject to the constraints 𝒞(*x*), where *x* = (*x*_{1}, …, *x _{n}*)^{T}, and let *x*^{1} and *x*^{2} be two disjoint groups of variables that together comprise all components of *x*. Fixing *x*^{2} at an admissible value *x̂*^{2} yields a restricted problem $\langle {\tilde{J}}_{{\widehat{x}}^{2}}^{1},\tilde{\mathcal{C}}({x}^{1},{\widehat{x}}^{2})\rangle $ in the variables *x*^{1} alone, and symmetrically when *x*^{1} is fixed.

The variables *x*^{1} and *x*^{2} are said to be *independent* for the optimization problem ⟨*J*, 𝒞⟩ if the solution of the problem $\langle {\tilde{J}}_{{\widehat{x}}^{2}}^{1},\tilde{\mathcal{C}}({x}^{1},{\widehat{x}}^{2})\rangle $ does not depend on *x̂*^{2}, and similarly for the problem $\langle {\tilde{J}}_{{\widehat{x}}^{1}}^{2},\tilde{\mathcal{C}}({\widehat{x}}^{1},{x}^{2})\rangle $. In particular, if there is just one point *x̂*^{2} satisfying the constraints, the variables *x*^{1} and *x*^{2} are independent.

We call the optimization problem ⟨*J*, 𝒞⟩ *splittable* if it has independent groups of variables. Consider the following example:

$$\begin{array}{c}J(x)={J}^{1}({x}^{1})+{J}^{2}({x}^{2})\to \text{min},\\ Cx=b,\quad x\in X,\end{array}$$

(8)

where *x*^{1} and *x*^{2} are composed of the components of *x* with indices from the sets *I*^{1} and *I*^{2} respectively, *C* is a *k* × *n* matrix (*k* < *n*), rank *C* = *k*, and *b* is a *k*-dimensional vector.

The variables *x*^{1} and *x*^{2} are independent for the regarded optimization problem if and only if there is a matrix *D*, det *D* ≠ 0, such that in every row of the matrix *DC* all elements with indices either from *I*^{1} or from *I*^{2} equal zero. The proof of this statement is given in the Appendix.

We call an objective function *additive* if it is essentially similar to an objective function that can be written as follows

$$J({x}_{1},\dots,{x}_{n})=\sum _{i=1}^{n}{g}_{i}({x}_{i}).$$

(9)

Assume that the additive objective function (9) is minimized subject to linear constraints:

$$Cx=b,$$

where *C* is a *k* × *n* matrix, *k* < *n*, rank *C* = *k*, *b* is a *k*-dimensional vector. Then the problem is splittable if and only if there is a *k* × *k*-matrix *D*, det *D* ≠ 0, such that by reordering the rows one can make the matrix *DC* block-diagonal.

This statement is a particular case of the last example (8), which is proven in Appendix as Lemma 2.

Thus, if an additive objective function is minimized under linear constraints, the question of whether the corresponding optimization problem is splittable depends only on the properties of the constraint matrix *C*. For this reason we call a full-rank matrix *C* *splittable* if it satisfies the above-mentioned conditions.

The matrix *C* is splittable if and only if the matrix *Č* = *I* − *C ^{T}*(*CC ^{T}*)^{−1}*C* can be made block-diagonal by reordering rows and columns with the same indices.
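This criterion is easy to automate. The sketch below (the helper names `c_check` and `is_splittable` are ours, not from the paper) computes *Č* and tests whether a simultaneous reordering of rows and columns can make it block-diagonal, which reduces to checking whether the graph of its non-zero entries is disconnected.

```python
import numpy as np

def c_check(C):
    """Projector Č = I − Cᵀ(CCᵀ)⁻¹C from the splittability criterion."""
    n = C.shape[1]
    return np.eye(n) - C.T @ np.linalg.inv(C @ C.T) @ C

def is_splittable(C, tol=1e-10):
    """True iff Č can be made block-diagonal by reordering rows and
    columns with the same indices, i.e. iff the graph whose edges are
    the non-zero entries of Č is disconnected."""
    A = np.abs(c_check(C)) > tol
    n = A.shape[0]
    seen, stack = {0}, [0]
    while stack:                  # flood-fill one connected component
        i = stack.pop()
        for j in range(n):
            if (A[i, j] or A[j, i]) and j not in seen:
                seen.add(j)
                stack.append(j)
    return len(seen) < n          # a second component exists => splittable
```

A variable whose row and column of *Č* vanish forms an isolated node and hence a vacuous block of its own, matching the case where a variable is fully determined by the constraints.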

The main goal of the current study is to find sufficient conditions for the uniqueness of solutions of an inverse optimization problem with additive objective function and linear constraints:

$$J(x)={\displaystyle \sum _{i=1}^{n}{g}_{i}({x}_{i})\to \text{min},}$$

(10)

$$Cx=b,\quad x\in X\subset {\mathbb{R}}^{n},$$

(11)

where *C* is a *k* × *n* matrix, rank *C* = *k*, and *b* is a *k*-dimensional vector.

The formulas (10), (11) define a class of direct optimization problems parametrized by *b* ∈ *B*, where *B* is a domain in ℝ^{k}. From here on we assume that every direct optimization problem has a unique solution and that the solutions are known for all *b* ∈ *B*. The set of these solutions will be denoted by *X**.

Given the set *X**, the inverse optimization problem consists of finding functions *g _{i}* such that minimization of (10) subject to the constraints (11) yields the solutions *X** for all *b* ∈ *B*.

The optimization problem (10), (11) imposes strong requirements on the functions *g _{i}*, which come from the Lagrange minimum principle and must hold at every point of *X**.

**Lemma 1** *If the functions g _{i}*(·) *are twice continuously differentiable and X** *is the set of solutions of the problem* (10), (11), *then*

$$\check{C}{g}^{\prime}(x)=0,\quad \textit{for every } x\in {X}^{*},$$

(12)

*where*

$$\check{C}=I-{C}^{T}{\left(C{C}^{T}\right)}^{-1}C$$

(13)

*and* ${g}^{\prime}(x)={({g}_{1}^{\prime}({x}_{1}),\dots \phantom{\rule{thinmathspace}{0ex}},{g}_{n}^{\prime}({x}_{n}))}^{T}$ (*prime symbol denotes derivative*) *and I is the n* × *n unit matrix*.

The proof of Lemma 1 is given in the Appendix. Using this lemma we prove in the Appendix the following statement, which is one of our main results.

**Theorem 1** *Assume that the inverse optimization problem* (10), (11) *with k* ≥ 2 *is non-splittable. If twice continuously differentiable functions g _{i}*(·) *and f _{i}*(·), *minimized subject to* (11), *yield the same set of solutions X** *for all b* ∈ *B, then there exists r* ≠ 0 *such that*

$$\begin{array}{c}{g}_{i}({x}_{i})=r{f}_{i}({x}_{i})+{q}_{i}{x}_{i}+{\text{const}}_{i},\quad \textit{for every } {x}_{i}\in {X}_{i}^{*},\\ {X}_{i}^{*}=\{s\mid \textit{there is } x\in {X}^{*}:{x}_{i}=s\}\end{array}$$

*where the constants q _{i} satisfy the equation Čq* = 0, *q* = (*q*_{1}, …, *q _{n}*)^{T}.

This theorem provides sufficient conditions for the existence and uniqueness, up to linear terms *q _{i} x_{i}*, of solutions of the inverse optimization problem. This means that if one can find functions *f _{i}* such that minimization of

$$\tilde{J}={\displaystyle \sum _{i=1}^{n}{f}_{i}({x}_{i})\to \text{min}}$$

subject to the constraints (11) for all *b* ∈ *B* results in the set *X**, then the desired objective function *J* is essentially similar to *J̃* up to unknown linear terms *q _{i} x_{i}*.

The values *q _{i}* satisfy

$$q={\displaystyle \sum _{i=1}^{k}{c}_{i}^{T}{p}_{i}={C}^{T}p,}$$

(14)

where ${c}_{i}^{T}$ is the *i*th column of the matrix *C ^{T}* and *p _{i}* are arbitrary scalars, *p* = (*p*_{1}, …, *p _{k}*)^{T}.

To illustrate why the inverse optimization problem (10), (11) can be solved only up to unknown *p _{i}*, write

$${q}^{T}x=\sum _{j=1}^{n}{q}_{j}{x}_{j}=\sum _{i=1}^{k}\left({c}_{i}x\right){p}_{i}={(Cx)}^{T}p={b}^{T}p.$$

Thus, given the constraints (11), the expression *q ^{T} x* does not depend on *x*: it equals *b ^{T} p* on the whole feasible set. Adding the linear terms *q _{i} x_{i}* therefore shifts the objective function by a constant and does not change the minimizer.
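The invariance of *q ^{T} x* on the feasible set can be verified directly (a sketch with an arbitrary random full-rank constraint matrix; feasible points are generated as a particular solution plus null-space directions of *C*):

```python
import numpy as np

rng = np.random.default_rng(0)
C = rng.normal(size=(2, 5))     # a random full-rank 2 x 5 constraint matrix
b = rng.normal(size=2)
p = rng.normal(size=2)
q = C.T @ p                     # linear-term coefficients, as in (14)

x_part = np.linalg.lstsq(C, b, rcond=None)[0]   # one feasible point
null_basis = np.linalg.svd(C)[2][2:].T          # basis of the null space of C

# other feasible points: x_part plus arbitrary null-space components
feasible = [x_part + null_basis @ rng.normal(size=3) for _ in range(5)]
values = [q @ x for x in feasible]              # all equal b^T p
```

Since *q* = *C ^{T} p*, for every feasible *x* we get *q ^{T} x* = *p ^{T}Cx* = *p ^{T}b*, so the linear terms cannot be recovered from the observed minimizers.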

An important consequence of Theorem 1 is that it provides a unique solution of the inverse optimization problem. We first consider the case *k* = *n* − 1. We call such an inverse optimization problem *elementary*. In this case the rank of the matrix *Č* equals one and, consequently, the vector equation *Č f′* (*x*) = 0 is equivalent to the following scalar equation:

$$\sum _{i=1}^{n}{a}_{i}{f}_{i}^{\prime}({x}_{i})=0,$$

where *a* = (*a*_{1}, …, *a _{n}*) is any non-zero row of the matrix *Č*.

At the same time, *X** is an (*n* − 1)-dimensional smooth hypersurface in the *n*-dimensional space, which can be defined by a single scalar equation. Now the inverse optimization problem consists of finding a collection of functions *h*_{1}(*x*_{1}), …, *h _{n}*(*x _{n}*) such that the equation

$$\sum _{i=1}^{n}{h}_{i}({x}_{i})=0$$

defines the hypersurface *X**. Indeed, if such functions are known then

$${f}_{i}({x}_{i})=\frac{1}{{a}_{i}}\int {h}_{i}({x}_{i})\,d{x}_{i}$$

and

$$J(x)=r\sum _{i=1}^{n}\frac{1}{{a}_{i}}\int {h}_{i}({x}_{i})\,d{x}_{i}+\sum _{i=1}^{n}{q}_{i}{x}_{i}.$$

Here *r* = ±1, and the sign of *r* determines whether the objective function *J* reaches a minimum or a maximum on the given set *X**.
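For an elementary problem this integration step can be carried out symbolically. The sketch below uses `sympy` with an assumed surface *h*_{1} + *h*_{2} + *h*_{3} = 0 and coefficient vector *a* = (1, −2, 1) (the same data as in the non-polynomial example later in the paper) and recovers the *f _{i}* up to the factor *r*, the linear terms and constants.

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')

# Assumed surface X*: h1(x1) + h2(x2) + h3(x3) = 0, with a = (1, -2, 1)
# being a row of the matrix C-check.
h = {x1: x1, x2: -2 * x2, x3: 5 * sp.cos(x3)}
a = {x1: 1, x2: -2, x3: 1}

# f_i(x_i) = (1 / a_i) * integral of h_i(x_i) dx_i
f = {v: sp.integrate(h[v], v) / a[v] for v in (x1, x2, x3)}
J = sum(f.values())   # objective up to the factor r, linear terms, constants
```

Here the recovered sum is *x*_{1}²/2 + *x*_{2}²/2 + 5 sin *x*_{3}, in agreement with the closed-form calculation.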

Now consider the case of an arbitrary *k*. Let *x*^{1} and *x*^{2} be two groups of variables such that *x*^{1} ∈ ℝ^{k+1} and *x*^{2} comprises the remaining *n* − *k* − 1 variables. An optimal solution *x** of the problem (10), (11) corresponds to some values of these variables: *x*^{1}* and *x*^{2}*. Consider the following optimization problem, obtained from the initial one by adding the constraint *x*^{2} = *x*^{2}*:

$${J}_{{x}^{2*}}({x}^{1})=J({x}^{1},{x}^{2*})=\sum _{i\in {I}^{1}}{g}_{i}({x}_{i})+\sum _{i\in {I}^{2}}{g}_{i}({x}_{i}^{*})\to \text{min},$$

(15)

$${C}^{1}{x}^{1}=b-{C}^{2}{x}^{2*}.$$

(16)

Here *I*^{1} and *I*^{2} are the sets of indices of *x* corresponding to *x*^{1} and *x*^{2}; the matrices *C*^{1} and *C*^{2} are composed of the columns of *C* with indices from *I*^{1} and *I*^{2} respectively.

Evidently, if *x*^{1} is a solution of (15), (16), then *x*^{1} = *x*^{1}*. One can define the set *X*^{1}* of all *x*^{1}*, which is the projection of *X** onto the axes with indices in *I*^{1}.

The problem (15), (16) is defined for a (*k* + 1)-dimensional variable *x*^{1} subject to *k* constraints, i.e. it is an elementary inverse optimization problem. If it is non-splittable, then the procedure described above is applicable and allows one to estimate the functions *g _{i}*(·) for *i* ∈ *I*^{1}.

If the inverse optimization problem is splittable, it should be split until we reach a non-splittable subproblem. We thus proved that, if the initial problem (10), (11) is non-splittable, then for every *i* there exists a non-splittable elementary subproblem with at least two constraints.

It must be noted that to apply the proposed method the researcher must assume that the experimentally observed hypersurface is composed of solutions of an optimization problem with an additive objective function and known linear constraints. However, this assumption may not actually hold, as for a given set of constraints there may be no additive objective function whose minimization results in the given hypersurface of solutions. In this case the method would produce a non-feasible objective function, which, for example, reaches neither its minimum nor its maximum on the observed surface, but instead has only hyperbolic points on it.

We illustrate our method by four simple inverse optimization problems.

Consider an inverse optimization problem with an additive objective function of three variables:

$$J({x}_{1},{x}_{2},{x}_{3})={g}_{1}({x}_{1})+{g}_{2}({x}_{2})+{g}_{3}({x}_{3})$$

(17)

subject to the constraints:

$$\begin{array}{rl}{x}_{1}+{x}_{2}+{x}_{3}&={b}_{1},\\ {x}_{1}-{x}_{3}&={b}_{2}.\end{array}$$

(18)

The solution of the direct optimization problem is known for a range of *b*_{1} and *b*_{2}:

$$\begin{array}{rl}{x}_{1}&=\frac{{b}_{1}}{3}+\frac{{b}_{2}}{2},\\ {x}_{2}&=\frac{{b}_{1}}{3},\\ {x}_{3}&=\frac{{b}_{1}}{3}-\frac{{b}_{2}}{2}.\end{array}$$

(19)

We wish to determine the objective function (17) such that its minimization subject to the constraints (18) leads to the solution (19).

The constraints (18) are given by the matrix *C*:

$$C=\left(\begin{array}{ccc}1&1&1\\ 1&0&-1\end{array}\right).$$

(20)

Note that the matrix *C* is non-splittable. Indeed, the matrix *Č* = *I* − *C ^{T}*(*CC ^{T}*)^{−1}*C*,

$$\check{C}=\left(\begin{array}{ccc}\frac{1}{6}&-\frac{1}{3}&\frac{1}{6}\\ -\frac{1}{3}&\frac{2}{3}&-\frac{1}{3}\\ \frac{1}{6}&-\frac{1}{3}&\frac{1}{6}\end{array}\right)$$

cannot be made block-diagonal by reordering the rows and columns with the same indices.

Now we wish to find twice continuously differentiable functions *f*_{1}, *f*_{2}, *f*_{3} satisfying the equation:

$$\check{C}\left(\begin{array}{c}{f}_{1}^{\prime}({x}_{1})\\ {f}_{2}^{\prime}({x}_{2})\\ {f}_{3}^{\prime}({x}_{3})\end{array}\right)=0$$

for every *x*_{1}, *x*_{2}, *x*_{3} from (19). Since the matrix *Č* has rank one, the latter is equivalent to the following scalar equation:

$${f}_{1}^{\prime}({x}_{1})-2{f}_{2}^{\prime}({x}_{2})+{f}_{3}^{\prime}({x}_{3})=0.$$

(21)

Solution (19) determines a plane in the 3-dimensional space:

$${x}_{1}-2{x}_{2}+{x}_{3}=0.$$

(22)

We need to find any functions *f _{i}* satisfying (21) on the plane (22); for example, we can take

$$\begin{array}{c}{f}_{1}^{\prime}({x}_{1})={x}_{1},\\ {f}_{2}^{\prime}({x}_{2})={x}_{2},\\ {f}_{3}^{\prime}({x}_{3})={x}_{3}.\end{array}$$

Then, according to Theorem 1, the functions *g _{i}* are equal to:

$$\begin{array}{c}{g}_{1}({x}_{1})=\frac{r}{2}{x}_{1}^{2}+{q}_{1}{x}_{1}+{\text{const}}_{1},\\ {g}_{2}({x}_{2})=\frac{r}{2}{x}_{2}^{2}+{q}_{2}{x}_{2}+{\text{const}}_{2},\\ {g}_{3}({x}_{3})=\frac{r}{2}{x}_{3}^{2}+{q}_{3}{x}_{3}+{\text{const}}_{3},\end{array}$$

(23)

where *r* is an arbitrary non-zero scalar. The values *q _{i}* can be represented as

$$\left(\begin{array}{c}{q}_{1}\\ {q}_{2}\\ {q}_{3}\end{array}\right)=\left(\begin{array}{c}1\\ 1\\ 1\end{array}\right){p}_{1}+\left(\begin{array}{c}1\\ 0\\ -1\end{array}\right){p}_{2}.$$

Since the objective function can be determined only up to the class of essentially similar functions, the constants in (23) can be set to zero and the parameter *r* can be taken equal to ±1. Since we want the resulting objective function to be minimized rather than maximized at the solutions (19), the parameter *r* must be positive.

The desired objective function is essentially similar to:

$$J({x}_{1},{x}_{2},{x}_{3})=\frac{{x}_{1}^{2}}{2}+\frac{{x}_{2}^{2}}{2}+\frac{{x}_{3}^{2}}{2}+({p}_{1}+{p}_{2}){x}_{1}+{p}_{1}{x}_{2}+({p}_{1}-{p}_{2}){x}_{3},$$

(24)

where *p*_{1} and *p*_{2} are arbitrary scalar numbers.
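As a sanity check, one can verify numerically that minimizing (24) under the constraints (18) reproduces the observed solutions (19) for any choice of the arbitrary scalars (the values of *p*_{1}, *p*_{2}, *b*_{1}, *b*_{2} below are arbitrary):

```python
import numpy as np
from scipy.optimize import minimize

p1, p2 = 0.7, -1.3   # arbitrary scalars in (24)

def J(x):
    # the recovered objective function (24)
    return (0.5 * np.sum(x ** 2) + (p1 + p2) * x[0]
            + p1 * x[1] + (p1 - p2) * x[2])

def solve(b1, b2):
    # minimize (24) subject to the constraints (18)
    cons = [{'type': 'eq', 'fun': lambda x: x[0] + x[1] + x[2] - b1},
            {'type': 'eq', 'fun': lambda x: x[0] - x[2] - b2}]
    return minimize(J, np.zeros(3), constraints=cons, method='SLSQP').x

x = solve(3.0, 1.0)
predicted = np.array([3.0 / 3 + 1.0 / 2, 3.0 / 3, 3.0 / 3 - 1.0 / 2])  # (19)
```

The linear terms only shift the objective by the constant *b ^{T} p* on the feasible set, so the minimizer coincides with (19) regardless of *p*_{1} and *p*_{2}.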

Consider an inverse optimization problem with an additive objective function of four variables:

$$J\phantom{\rule{thinmathspace}{0ex}}({x}_{1},{x}_{2},{x}_{3},{x}_{4})={g}_{1}({x}_{1})+{g}_{2}({x}_{2})+{g}_{3}({x}_{3})+{g}_{4}({x}_{4})$$

(25)

subject to the constraints:

$$\begin{array}{rl}{x}_{1}+{x}_{2}+{x}_{3}+{x}_{4}&={b}_{1},\\ 2{x}_{1}+{x}_{2}+{x}_{4}&={b}_{2},\\ {x}_{1}+{x}_{2}+{x}_{3}+2{x}_{4}&={b}_{3}.\end{array}$$

(26)

Let the solutions of the direct optimization problems for a range of *b*_{1}, *b*_{2}, *b*_{3} lie within a 3-dimensional subspace with the normal vector *u* = (1, −2, 1, 0)^{T}.

The first step is to verify that the problem is non-splittable. The matrix *C* for the constraints (26) is

$$C=\left(\begin{array}{cccc}1&1&1&1\\ 2&1&0&1\\ 1&1&1&2\end{array}\right).$$

(27)

The matrix *Č* = *I* − *C ^{T}*(*CC ^{T}*)^{−1}*C* is

$$\check{C}=\left(\begin{array}{cccc}\frac{1}{6}&-\frac{1}{3}&\frac{1}{6}&0\\ -\frac{1}{3}&\frac{2}{3}&-\frac{1}{3}&0\\ \frac{1}{6}&-\frac{1}{3}&\frac{1}{6}&0\\ 0&0&0&0\end{array}\right)$$

and is block-diagonal, hence the matrix *C* is splittable.

The problem splits into two subproblems. The first one is:

$$\begin{array}{rl}\tilde{J}({x}_{1},{x}_{2},{x}_{3})&={g}_{1}({x}_{1})+{g}_{2}({x}_{2})+{g}_{3}({x}_{3})\to \text{min},\\ \text{such that}\quad {x}_{1}+{x}_{2}+{x}_{3}&=2{b}_{1}-{b}_{3},\\ {x}_{1}-{x}_{3}&={b}_{2}-{b}_{1}.\end{array}$$

(28)

Its solutions for a range of *b*_{1}, *b*_{2}, *b*_{3} lie in a plane in the 3-dimensional space, which can be obtained by projecting the solutions of the initial problem onto the space of the variables *x*_{1}, *x*_{2}, *x*_{3}. This plane passes through the origin and is orthogonal to the vector *ũ* = (1, −2, 1)^{T}.

The second problem is vacuous, since *x*_{4} can be unambiguously determined from the constraints:

$${x}_{4}={b}_{3}-{b}_{1}$$

and, thus, *g*_{4} can be an arbitrary function.

In this case, the best one can do is to estimate the functions *g*_{1}, *g*_{2}, *g*_{3}. The constraints in (28) are defined by the matrix *C̃*:

$$\tilde{C}=\left(\begin{array}{ccc}1&1&1\\ 1&0&-1\end{array}\right).$$

(29)

The plane of the solutions of the direct problem (28) can be defined by the equation:

$${x}_{1}-2{x}_{2}+{x}_{3}=0.$$

(30)

The constraint matrix (29) and the surface of the solutions (30) coincide with those of the previous example. Thus, the objective function *J̃*(*x*_{1}, *x*_{2}, *x*_{3}) is essentially similar to (24) and hence,

$$J({x}_{1},{x}_{2},{x}_{3},{x}_{4})=\frac{{x}_{1}^{2}}{2}+\frac{{x}_{2}^{2}}{2}+\frac{{x}_{3}^{2}}{2}+({p}_{1}+{p}_{2}){x}_{1}+{p}_{1}{x}_{2}+({p}_{1}-{p}_{2}){x}_{3}+{g}_{4}({x}_{4}),$$

where *p*_{1} and *p*_{2} are arbitrary numbers and *g*_{4} is an arbitrary scalar function.
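As a sanity check (our illustration, not from the paper), the sketch below solves the direct subproblem (28) with the quadratic objective above for several right-hand sides and confirms that every minimizer lies on the plane (30); the values of *p*_{1} and *p*_{2} are arbitrary:

```python
import numpy as np

# Constraint matrix of the subproblem (28), Eq. (29)
C_tilde = np.array([[1., 1., 1.],
                    [1., 0., -1.]])

p1, p2 = 0.7, -0.3                       # arbitrary parameter values
q = np.array([p1 + p2, p1, p1 - p2])     # linear terms of the objective

def minimizer(b_tilde):
    """Minimize sum(x_i**2/2) + q @ x subject to C_tilde @ x = b_tilde
    by solving the KKT system."""
    KKT = np.block([[np.eye(3), C_tilde.T],
                    [C_tilde, np.zeros((2, 2))]])
    rhs = np.concatenate([-q, b_tilde])
    return np.linalg.solve(KKT, rhs)[:3]

# Every minimizer satisfies x1 - 2*x2 + x3 = 0, i.e. lies on the plane (30)
u = np.array([1., -2., 1.])
residuals = [abs(minimizer(np.array([b1, b2])) @ u)
             for b1 in (-1., 0., 2.) for b2 in (-0.5, 1.5)]
print(max(residuals))
```

The residual is zero (up to round-off) because *q*_{1} − 2*q*_{2} + *q*_{3} = 0 for any *p*_{1}, *p*_{2}.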

In the previous examples the estimated objective function is quadratic, reflecting the fact that the surface of solutions is planar. Here we illustrate how our method can be used to analyze non-polynomial objective functions. Consider the inverse optimization problem from Example 1 with the additive objective function (17) and the linear constraints (18). Assume that the variation of the parameters *b*_{1} and *b*_{2} results in a surface defined by the equation:

$${x}_{1}-2{x}_{2}+5\phantom{\rule{thinmathspace}{0ex}}\text{cos}\phantom{\rule{thinmathspace}{0ex}}{x}_{3}=0.$$

(31)

According to Lemma 1 the functions *g _{i}* from (17) satisfy the equation

$${g}_{1}^{\prime}({x}_{1})-2{g}_{2}^{\prime}({x}_{2})+{g}_{3}^{\prime}({x}_{3})=0.$$

Then according to Theorem 1

$$\begin{array}{c}{g}_{1}^{\prime}({x}_{1})={\mathit{\text{rx}}}_{1}+{q}_{1},\hfill \\ {g}_{2}^{\prime}({x}_{2})={\mathit{\text{rx}}}_{2}+{q}_{2},\hfill \\ {g}_{3}^{\prime}({x}_{3})=5r\phantom{\rule{thinmathspace}{0ex}}\text{cos}\phantom{\rule{thinmathspace}{0ex}}{x}_{3}+{q}_{3},\hfill \end{array}$$

where *r* is a non-zero scalar whose sign determines whether the optimization problem consists in minimization or maximization of the objective function, and *q _{i}* are arbitrary scalars satisfying the equation *q*_{1} − 2*q*_{2} + *q*_{3} = 0.

The objective function *J* is essentially similar to the following:

$$J\phantom{\rule{thinmathspace}{0ex}}({x}_{1},{x}_{2},{x}_{3})=\frac{{x}_{1}^{2}}{2}+\frac{{x}_{2}^{2}}{2}+5\phantom{\rule{thinmathspace}{0ex}}\text{sin}\phantom{\rule{thinmathspace}{0ex}}{x}_{3}+({p}_{1}+{p}_{2}){x}_{1}+{p}_{1}{x}_{2}+({p}_{1}-{p}_{2}){x}_{3},$$

where *p*_{1} and *p*_{2} are arbitrary scalar numbers.
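A numerical sketch of this example (ours, with an assumed particular solution `x0` of the constraints, the feasible direction *u* = (1, −2, 1) implied by the Lemma 1 condition above, and *p*_{1} = *p*_{2} = 0): minimizing *J* along the feasible line and checking that the minimizer lands on the surface (31):

```python
import math

# Assumed particular solution of the constraints (18); the feasible line
# is x(t) = x0 + t*u with u = (1, -2, 1), the direction implied by the
# condition g1' - 2*g2' + g3' = 0 of Lemma 1
x0 = (0.4, 1.1, -0.7)

def point(t):
    return (x0[0] + t, x0[1] - 2*t, x0[2] + t)

def J(t):
    x1, x2, x3 = point(t)
    return x1**2/2 + x2**2/2 + 5*math.sin(x3)   # objective with p1 = p2 = 0

def surface_residual(t):
    x1, x2, x3 = point(t)
    # Left-hand side of Eq. (31); it also equals dJ/dt along the line
    return x1 - 2*x2 + 5*math.cos(x3)

# Coarse grid search for the minimum of J along the feasible line ...
ts = [-10 + 1e-3*i for i in range(20001)]
t_best = min(ts, key=J)

# ... refined by bisection on dJ/dt (J is convex in t here)
lo, hi = t_best - 1e-3, t_best + 1e-3
for _ in range(80):
    mid = (lo + hi) / 2
    if surface_residual(lo) * surface_residual(mid) <= 0:
        hi = mid
    else:
        lo = mid
t_star = (lo + hi) / 2
print(abs(surface_residual(t_star)))
```

The residual vanishes at the minimizer, confirming that the solutions of the direct problem sweep the surface (31).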

In the previous example the method resulted in an estimate of the feasible objective functions. Here we give an example of a hypersurface that cannot result from minimization of any additive objective function. Consider the inverse optimization problem from Example 1 with the additive objective function (17) and the linear constraints (18). Assume that the variation of the parameters *b*_{1} and *b*_{2} results in a surface defined by the equation:

$${x}_{1}+2{x}_{2}+{x}_{3}=0.$$

(32)

According to Lemma 1 the functions *g _{i}* from (17) satisfy the equation

$${g}_{1}^{\prime}({x}_{1})-2{g}_{2}^{\prime}({x}_{2})+{g}_{3}^{\prime}({x}_{3})=0.$$

Then according to Theorem 1

$$\begin{array}{c}{g}_{1}^{\prime}({x}_{1})={\mathit{\text{rx}}}_{1}+{q}_{1},\hfill \\ {g}_{2}^{\prime}({x}_{2})=-{\mathit{\text{rx}}}_{2}+{q}_{2},\hfill \\ {g}_{3}^{\prime}({x}_{3})={\mathit{\text{rx}}}_{3}+{q}_{3},\hfill \end{array}$$

where *r* is a non-zero scalar and *q _{i}* are arbitrary scalars satisfying the equation *q*_{1} − 2*q*_{2} + *q*_{3} = 0.

The estimated objective function *J* is the following:

$$J({x}_{1},{x}_{2},{x}_{3})=\frac{r}{2}\left({x}_{1}^{2}-{x}_{2}^{2}+{x}_{3}^{2}\right)+({p}_{1}+{p}_{2}){x}_{1}+{p}_{1}{x}_{2}+({p}_{1}-{p}_{2}){x}_{3},$$

where *p*_{1} and *p*_{2} are arbitrary scalar numbers.

Evidently, for any non-zero *r*, the estimated function does not reach its minimum subject to the constraints (18) at any point of ℝ^{3}. This proves the falsity of the hypothesis that the hypersurface (32) results from minimization of an additive objective function subject to the constraints (18).

To illustrate the applicability of the approach presented above to “real-life” tasks, we analyze the problem of force sharing among the digits in prismatic grasping, when a subject holds a handle similarly to holding a glass with liquid. The points of application of the thumb and finger forces are assumed to lie in the grasp plane, which is parallel to the longitudinal handle axis (see Fig. 1). An external force *F ^{l}* parallel to the handle axis and an external torque *T* are applied to the handle.

Schematic representation of the handle. Here *T* stands for the external torque, *F*^{l} for the load force, ${F}_{0}^{n}\phantom{\rule{thinmathspace}{0ex}}\text{and}\phantom{\rule{thinmathspace}{0ex}}{F}_{0}^{t}$ for the normal and tangential components of the thumb force, ${F}_{i}^{n}\phantom{\rule{thinmathspace}{0ex}}\text{and}\phantom{\rule{thinmathspace}{0ex}}{F}_{i}^{t}$ for the normal and tangential components of the finger forces (*i* = 1, …, 4)

In the planar case, the static equilibrium constraints include two equations on the forces and one equation on the moments of force. For a vertically oriented handle, the load force *F ^{l}* must be counterbalanced by the tangential forces of the fingers $({F}_{1}^{t},\dots \phantom{\rule{thinmathspace}{0ex}},{F}_{4}^{t})$ and the thumb $({F}_{0}^{t})$:

$${F}_{0}^{t}+{F}_{1}^{t}+{F}_{2}^{t}+{F}_{3}^{t}+{F}_{4}^{t}=-{F}^{l}.$$

(33)

The normal force of the thumb ${F}_{0}^{n}$ must be equal and opposite to the total normal force of the fingers:

$${F}_{0}^{n}={F}_{1}^{n}+{F}_{2}^{n}+{F}_{3}^{n}+{F}_{4}^{n}.$$

(34)

The total moment of the normal and tangential forces must counterbalance the external torque *T*:

$$-{d}_{1}{F}_{1}^{n}-{d}_{2}{F}_{2}^{n}+{d}_{3}{F}_{3}^{n}+{d}_{4}{F}_{4}^{n}+{r}_{0}{F}_{0}^{t}-{r}_{1}({F}_{1}^{t}+{F}_{2}^{t}+{F}_{3}^{t}+{F}_{4}^{t})=T.$$

(35)

The normal forces must be non-negative and cannot exceed their maximum values:

$$0\le {F}_{i}^{n}\le {F}_{i}^{n\phantom{\rule{thinmathspace}{0ex}}\text{max}},\text{}i=0,\dots \phantom{\rule{thinmathspace}{0ex}},4.$$

(36)

The tangential forces must stay below the maximum static friction force:

$$|{F}_{i}^{t}|\le {\mu}_{i}{F}_{i}^{n},\text{}i=0,\dots \phantom{\rule{thinmathspace}{0ex}},4,$$

(37)

where µ_{i} is the coefficient of static Coulomb friction.

The static equilibrium imposes three equality-type constraints (33), (34), (35) on ten force variables: five normal and five tangential forces. Even though they must also satisfy fifteen inequalities (36), (37), in general, the problem is redundant.

In spite of the redundancy, the force sharing among the fingers is quite reproducible across trials with fixed load force *F ^{l}* and external torque *T* (Shim et al. 2003; Zatsiorsky et al. 2003). This is especially true for the normal forces. For zero external torque the normal forces ${F}_{i}^{n}$ are known to scale with the load force (Niu et al. 2007; Westling and Johansson 1984). It is therefore reasonable to assume that particular force-sharing patterns result from minimization of a certain objective function of the normal and tangential forces.

We assume that the force distribution among fingers in grasping results from the minimization of the objective function *J* for given handle geometry and friction coefficients:

$$J={\displaystyle \sum _{i=0}^{4}{g}_{i}({F}_{i}^{n})+H\phantom{\rule{thinmathspace}{0ex}}({F}_{0}^{t},{F}_{1}^{t},\dots \phantom{\rule{thinmathspace}{0ex}},{F}_{4}^{t}),}$$

(38)

where *g _{i}* are scalar functions of the normal forces and *H* is a scalar function of the tangential forces.

Our goal here is to estimate the objective function involved in the sharing of the normal forces among the fingers, in particular, the functions *g _{i}* in (38). Indeed, the fact that the constraints are linear and the assumption that the objective function is additive with respect to the normal forces make it possible to find the functions *g _{i}* from the reduced problem:

$$\tilde{J}={\displaystyle \sum _{i=0}^{4}{g}_{i}({F}_{i}^{n})\to \text{min}}$$

(39)

subject to:

$$\begin{array}{cc}\hfill -{F}_{0}^{n}+{F}_{1}^{n}+{F}_{2}^{n}+{F}_{3}^{n}+{F}_{4}^{n}& =0,\hfill \\ \hfill -{d}_{1}{F}_{1}^{n}-{d}_{2}{F}_{2}^{n}+{d}_{3}{F}_{3}^{n}+{d}_{4}{F}_{4}^{n}& =T-{M}^{t},\hfill \end{array}$$

(40)

where *M ^{t}* stands for the total moment of the tangential forces. Equation (33) was omitted because it does not contain the normal forces.

According to Theorem 1, one can find the objective function unambiguously, up to linear terms, if a *k*-dimensional surface of solutions is known, where *k* is the number of the (equality) constraints. In our case *k* = 2, and therefore a 2-dimensional surface of optimal solutions is required.

Changes in the external torque affect only the second equation in (40), while the first one remains unchanged. The load force is not directly present in the constraints (40). Hence, varying the external torque at a constant load will produce a curve instead of the surface required to apply the method.

To overcome this difficulty we introduce an additional constraint by asking the subject to grip the handle with a given total grip force *F ^{g}*:

$${F}_{0}^{n}+{F}_{1}^{n}+{F}_{2}^{n}+{F}_{3}^{n}+{F}_{4}^{n}={F}^{g}.$$

(41)

The constraint (41) makes the problem splittable, since from (40) and (41) it follows that ${F}_{0}^{n}={F}^{g}/2$. Splitting the initial problem produces two subproblems. The first one contains only the normal force of the thumb and is vacuous. The second one includes the normal forces of the fingers:

$$\widehat{J}={\displaystyle \sum _{i=1}^{4}{g}_{i}({F}_{i}^{n})\to \text{min}}$$

(42)

subject to

$$\begin{array}{cc}\hfill {F}_{1}^{n}+{F}_{2}^{n}+{F}_{3}^{n}+{F}_{4}^{n}& ={F}^{g}/2,\hfill \\ \hfill -{d}_{1}{F}_{1}^{n}-{d}_{2}{F}_{2}^{n}+{d}_{3}{F}_{3}^{n}+{d}_{4}{F}_{4}^{n}& =T-{M}^{t}.\hfill \end{array}$$

(43)
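The splitting can also be verified numerically. In this sketch (ours; *F ^{g}* = 24 N is an arbitrary example value), the null space of the combined constraints has a zero thumb component, so ${F}_{0}^{n}={F}^{g}/2$ in every feasible solution:

```python
import numpy as np

# Combined equality constraints on (F0, F1, F2, F3, F4): the first
# equation of (40) and the grip-force constraint (41)
A = np.array([[-1., 1., 1., 1., 1.],   # -F0 + F1 + F2 + F3 + F4 = 0
              [ 1., 1., 1., 1., 1.]])  #  F0 + F1 + F2 + F3 + F4 = Fg
Fg = 24.0
b = np.array([0., Fg])

F_part = np.linalg.pinv(A) @ b         # a particular feasible solution
_, _, Vt = np.linalg.svd(A)
null_basis = Vt[2:]                    # rank(A) = 2, so 3 free directions

# The thumb component is zero in every null-space direction, hence
# F0 = Fg/2 in every feasible solution: the thumb subproblem is vacuous
print(F_part[0], np.max(np.abs(null_basis[:, 0])))
```

Only the remaining four finger forces stay free, which is exactly the subproblem (42), (43).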

The normal forces of the fingers must also satisfy the (inequality) constraints (36) and (37). However, as is known from various experiments (Johansson and Westling 1984; Cole and Johansson 1993), normal forces are typically 30–50% above the slipping threshold. If the external load is not very large, neither normal nor tangential forces approach the borders of the domain defined by (36), (37), and therefore these constraints can be omitted. The constraint matrix *C* for the problem (42), (43) has the form:

$$C=\left(\begin{array}{cccc}\hfill 1& \hfill 1& \hfill 1& \hfill 1\\ \hfill -{d}_{1}& \hfill -{d}_{2}& \hfill {d}_{3}& \hfill {d}_{4}\end{array}\right)\phantom{\rule{thinmathspace}{0ex}}.$$

(44)

The problem (42), (43) has two linear constraints containing two parameters: the external torque *T* and the grip force *F ^{g}*, which can be varied independently in the experiment. Thus, we hope that the finger forces recorded for different combinations of *T* and *F ^{g}* will form the 2-dimensional surface required by the method.

We present results obtained for three subjects. They were right-handed young male adults with no history of hand injury (age 27.6 ± 3.0 years, weight 74.7 ± 9.0 kg, height 176.3 ± 9.2 cm, hand length from the middle fingertip to the distal crease of the wrist with the hand extended 18.4 ± 0.9 cm, hand width at the MCP level with the hand extended 8.9 ± 0.7 cm).

In the experiment, the subjects held a handle mounted with five 6-dimensional force-torque sensors, whose surfaces were covered with sandpaper. The geometry of the handle is presented in Fig. 1. The top of the handle was equipped with an air-bubble level intended to help the subjects hold the handle vertically. A horizontal bar was attached to the handle at the bottom. Suspending various loads at different points along the bar allowed for varying both the load force *F ^{l}* and the external torque *T*.

For every combination there were two types of trials, calibration and experimental, lasting 10 s each. In the calibration trials the subjects were instructed to hold the handle naturally, trying not to grip it too hard. The total grip force was averaged over the 10-s period of the calibration trial. The averaged value was then used in the experimental trials, in which the subjects were instructed to make the gripping force equal to 100, 125, 150 or 175% of that value. The subjects could see the current value of the total normal force of the five digits (the gripping force) and the target value on a computer monitor located in front of them. Though the load force is not directly present in Eq. (43), the subjects tended to grip harder in the calibration trials with the greater load force. Thus, by changing the load force we increased the range of the grip force.

On the whole, every subject performed 80 experimental trials. In every trial the average normal finger forces were computed over a 2-second interval where they exhibited the least variation. Following this procedure we obtained 80 points (one point per trial) in the 4-dimensional space of the normal finger forces for every subject. The thumb data are not presented since they are not relevant to the problem (42), (43).

It should be noted that the points of application of the normal finger forces varied across trials and conditions, which in turn resulted in variation of the constraint matrix *C*. In contrast to the case described above, where *C* was a constant matrix, we assume that the variations of the matrix *C* result in small changes of the optimal finger forces compared to those caused by changing the external torque and the grip force. This assumption allows us to apply our method, keeping in mind that the experimental data points can be scattered around the ideal optimal surface. We used the following values of the geometrical parameters of the handle:

$${d}_{1}=45\phantom{\rule{thinmathspace}{0ex}}\text{mm},\text{}{d}_{2}=15\phantom{\rule{thinmathspace}{0ex}}\text{mm},\text{}{d}_{3}=15\phantom{\rule{thinmathspace}{0ex}}\text{mm},\text{}{d}_{4}=45\phantom{\rule{thinmathspace}{0ex}}\text{mm}.$$

The influence of the varied factors (the external torque, the load force and the grip force) on the normal finger forces is illustrated in Fig. 2. One can see that the influence can be rather complex (especially Fig. 2b). However, as mentioned earlier, we expect the experimental values of the normal finger forces to lie on a surface in the 4-dimensional space. To verify this, for every subject we plotted 3-dimensional projections of the experimental data on the subspaces (*F*_{1}, *F*_{2}, *F*_{3}), (*F*_{2}, *F*_{3}, *F*_{4}), (*F*_{1}, *F*_{3}, *F*_{4}) and (*F*_{1}, *F*_{2}, *F*_{4}). In the projections the points tended to lie on planes, but were dispersed around them. The planarity of the data in the 4-dimensional space of the finger forces was quantified using principal component analysis. It showed that 94.4 ± 0.4% of the total variance could be explained by two principal components, which define a plane in the 4-dimensional space.

An example of the normal forces of the index finger in different conditions. **a** The load force is fixed and equal to 12.5N, while the external torque and the total grip force are varied. The different symbols correspond to the values obtained for 100, **...**
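The planarity check described above can be sketched as follows; the force data here are synthetic stand-ins (the real measurements are not reproduced in this excerpt), but the PCA computation is the one used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the 80 four-dimensional force vectors: points on
# a 2-dimensional plane in R^4 (offset from the origin) plus noise
basis = rng.normal(size=(2, 4))
F = rng.normal(size=(80, 2)) @ basis + np.array([5., 4., 3., 2.])
F = F + 0.05 * rng.normal(size=F.shape)

# Principal component analysis via SVD of the centered data
Fc = F - F.mean(axis=0)
s = np.linalg.svd(Fc, compute_uv=False)
var = s**2 / np.sum(s**2)              # variance shares of the 4 components
planarity = var[0] + var[1]            # share captured by the best-fit plane
print(round(float(planarity), 4))
```

For the experimental data the corresponding share was 94.4 ± 0.4%; the two smallest principal components define the normal directions of the fitted plane used below.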

Based on this observation, we assumed that the surface of the optimal solutions is a 2-dimensional plane, which can be approximately estimated from the experimental data. The fact that the data points did not form an ideal plane can be explained by variation of the points of application of the finger forces, variability of the subjects' performance and instrumental noise. The plane can be defined by the vector equation:

$${\mathit{\text{AF}}}^{n}+b=0,$$

(45)

where ${F}^{n}={({F}_{1}^{n},{F}_{2}^{n},{F}_{3}^{n},{F}_{4}^{n})}^{T}$ is the vector of the normal finger forces, *A* is a full-rank 2 × 4 matrix and *b* is a 2-dimensional vector. The matrix *A* is composed of the transposed vectors of the two lesser principal components. The vector *b* is defined as *b* = −*A*${\overline{F}}^{n}$, where ${\overline{F}}^{n}$ is the mean vector of the experimental data.

According to Theorem 1, if there are functions ${f}_{1}({F}_{1}^{n}),{f}_{2}({F}_{2}^{n}),{f}_{3}({F}_{3}^{n}),{f}_{4}({F}_{4}^{n})$ satisfying *Č f′* (*F ^{n}*) = 0 on the plane (45), then they coincide with the sought functions *g _{i}* up to essential similarity. Since the surface of solutions is a plane, the derivatives *f _{i}′* are linear:

$${f}_{i}^{\prime}({F}_{i}^{n})={k}_{i}{F}_{i}^{n}+{w}_{i}$$

and hence the functions *f _{i}* are quadratic:

$${f}_{i}({F}_{i}^{n})=\frac{{k}_{i}}{2}{({F}_{i}^{n})}^{2}+{w}_{i}{F}_{i}^{n}.$$

(46)

Now, the inverse optimization problem consists of finding the coefficients *k*_{1}, *k*_{2}, *k*_{3}, *k*_{4} and the values *w*_{1}, *w*_{2}, *w*_{3}, *w*_{4} for which the plane defined by the equation *Č* (*K F ^{n}* + *w*) = 0, where *K* = diag(*k*_{1}, …, *k*_{4}) and *w* = (*w*_{1}, …, *w*_{4})^{T}, coincides with the experimentally determined plane (45). Because of the noise in the data, exact coincidence cannot be achieved.

For this reason we searched for the values *k _{i}* that minimize the angle α between the plane (45) and the plane defined by the equation *Č* (*K F ^{n}* + *w*) = 0.

The vector *w* was chosen to have minimal length. It can be easily shown that such a vector is defined by the formula:

$$w=-\stackrel{\u02c7}{C}K{\overline{F}}^{n}.$$

Thus, we found functions *f _{i}* satisfying the equation *Č f′* (*F ^{n}*) = 0 on the plane (45). According to Theorem 1, the functions *g _{i}* are essentially similar to the *f _{i}*:

$${g}_{i}({F}_{i}^{n})={\mathit{\text{rf}}}_{i}({F}_{i}^{n})+{q}_{i}{F}_{i}^{n}+{\text{const}}_{i}$$

and thus

$${g}_{i}({F}_{i}^{n})=r\phantom{\rule{thinmathspace}{0ex}}\left(\frac{{k}_{i}}{2}{({F}_{i}^{n})}^{2}+{w}_{i}{F}_{i}^{n}\right)+{q}_{i}{F}_{i}^{n}+{\text{const}}_{i}.$$

(47)

Here *r* is a nonzero number and the *q _{i}* are arbitrary numbers satisfying the equation *Čq* = 0, where *q* = (*q*_{1}, …, *q*_{4})^{T}.

Since the objective function can be estimated only up to the class of essentially similar functions, one can set const_{i} = 0 and |*r*| = 1. The sign of *r* is chosen in such a way that the resulting objective function corresponds to a minimization problem.

The vector *q* can be represented as follows:

$$q={c}^{1}{p}_{1}+{c}^{2}{p}_{2},$$

where *p*_{1}, *p*_{2} are arbitrary numbers, *c*^{1} and *c*^{2} are columns of the transposed constraints matrix *C ^{T}*. Thus,

$${q}_{i}={p}_{1}+{d}_{i}{p}_{2}.$$

(48)

Substituting the values of *g _{i}* from (47) and *q _{i}* from (48) into (42), we obtain:

$$\widehat{J}=\frac{1}{2}{\displaystyle \sum _{i=1}^{4}{k}_{i}{\left({F}_{i}^{n}\right)}^{2}+}{\displaystyle \sum _{i=1}^{4}({w}_{i}+{p}_{1}+{d}_{i}{p}_{2}){F}_{i}^{n}.}$$

(49)

The values *k _{i}* and *w _{i}* were estimated from the experimental data individually for every subject.

The uniqueness theorem requires the hypersurface of solutions to be known. In this example we have only a limited set of data points, which, in addition, are subject to noise. We idealize these data by assuming that they tend to lie on a hyperplane. However, it may happen that in an ideal experiment the hypersurface would differ slightly from a hyperplane and, consequently, the real objective function would differ from the estimated one. Thus, the estimated objective function represents a quadratic approximation of the real objective function. In general, this approximation is only as good as the approximation of the experimental data by the hyperplane. To illustrate the quality of the estimated objective function we solved the direct optimization problem with the objective function (49) and the constraints (43). The values on the right-hand side of the constraint equations (43) were computed from the experimental data. Since the solution of the direct optimization problem does not depend on the parameters *p*_{1} and *p*_{2}, they were set to zero. The average errors were computed for every finger as the average absolute difference between the experimentally observed value of the normal force and the one predicted by the optimization problem. The correlation between the predicted and experimental data and the average errors are presented in Fig. 3.

The correlation between the experimental data and the solution of the direct optimization problem for the normal finger forces. Each point corresponds to a particular combination of the external torque, the load force and the grip force. The errors were **...**
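The direct-problem computation can be sketched as follows (the coefficients *k _{i}* and *w _{i}* below are hypothetical placeholders, not the values estimated from the subjects' data; the handle geometry is taken from the text):

```python
import numpy as np

# Handle geometry from the text (mm) and hypothetical objective
# parameters k_i, w_i (placeholders, not the estimated values)
d = np.array([45., 15., 15., 45.])
k = np.array([1.0, 0.8, 0.9, 1.2])
w = np.array([-2., -1., -1.5, -2.5])

# Constraint matrix of Eq. (44)
C = np.array([[1., 1., 1., 1.],
              [-d[0], -d[1], d[2], d[3]]])

def predict_forces(Fg, T_minus_Mt):
    """Minimize 0.5*sum(k_i*F_i**2) + sum(w_i*F_i), i.e. (49) with
    p1 = p2 = 0, subject to the constraints (43), via the KKT system."""
    KKT = np.block([[np.diag(k), C.T],
                    [C, np.zeros((2, 2))]])
    rhs = np.concatenate([-w, [Fg / 2.0, T_minus_Mt]])
    return np.linalg.solve(KKT, rhs)[:4]

F = predict_forces(Fg=20.0, T_minus_Mt=100.0)   # example torque in N*mm
print(np.round(F, 3))
```

With positive *k _{i}* the objective is strictly convex, so the KKT system yields the unique predicted force-sharing pattern for each experimental condition.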

In this procedure we used the same set of data both to estimate the objective function and to illustrate its use in approximating the experimental results. To further validate our method and the assumptions on which it is based, e.g. that the objective function is additive and the plane is an acceptable approximation of the experimental data, we performed additional computational experiments in which estimation and validation sets of data were separated. For each subject we selected 60 random data points and used them to estimate the objective function. Then we solved the forward optimization problem with this objective function and compared the corresponding solutions with the remaining 20 data points. We performed this procedure 50 times for each subject and computed the average errors of the results of the forward optimization. We found that the average errors were only slightly larger (<20%) than those shown in Fig. 3 and did not exceed 0.6N; the coefficients *k _{i}* were always positive.

In this paper we analyze the problem of inverse optimization, which consists of finding an unknown objective function given the values at which the function reaches its minimum for a set of different constraints. This problem often arises when the principles of control of human movements are studied (Engelbrecht 2001). Nowadays, the inverse optimization problem is usually approached with the cut-and-try method. As a consequence, a number of different objective functions have been proposed to explain the control of the same motor task (Collins 1995; Cruse et al. 1990; Pataky et al. 2004).

The attempts to approach the inverse optimization problems more systematically are rather rare. Some theoretical results were obtained in the problem of the correction of a known objective function in linear programming and the theory of combinatorial optimization (Ahuja and Orlin 2001). Recently Bottasso et al. (2006) proposed a systematic approach to identifying unknown parameters of the objective function taking into account possible inaccuracy of the experimental data. Siemienski (2006) proposed an approach to non-parametric identification of an unknown objective function from experimental data for a specific class of additive objective functions, which are minimized subject to a single linear constraint. Siemienski was among the first to emphasize that the inverse optimization problem should be regarded for a set of different constraints.

The purpose of the current study is twofold: to develop a method for non-parametric identification of the objective function in the inverse optimization problem and to obtain sufficient conditions on the set of constraints (and the objective functions themselves) that would guarantee the uniqueness of the solution of the inverse optimization problem. We focus our analysis on the class of the inverse optimization problems with additive objective functions and linear constraints. This class includes various aspects of the force sharing problem, which is one of the most common inverse optimization problems in the science of human motions.

We show that an additive objective function can be identified almost uniquely from the set of values at which it reaches its minimum, provided that the dimension of this set equals the number of constraints in the problem. From the practical point of view this means that, in order to solve the inverse optimization problem, one should be able to vary the value of every constraint independently. We note that this condition was not met in most studies in which an objective function was proposed for various motor tasks. We believe that one of the reasons for the variety of objective functions proposed for force sharing is the insufficient number of experimental conditions used to determine the objective function. The conditions of the Uniqueness theorem proved here can be used to plan experiments aimed at identifying the objective function.

Moreover, we use the Uniqueness theorem to propose a method of solving the inverse optimization problem. To determine the additive objective function of *n* variables minimized subject to *k* constraints, one can find any *n* − *k* independent additive functions that equal zero on the *k*-dimensional hypersurface of the experimental data. Of course, in reality one has only a finite number of experimental observations and, therefore, these data should be used to determine the idealized hypersurface. The latter cannot be unambiguous, since it represents an attempt to estimate a non-parametrized function from a limited set of noisy data. Roughly speaking, our method provides an estimate of the objective function that is as accurate as the idealization of the hypersurface.

In this study we restrict our analysis to problems with equality constraints only. Usually “real-life” problems have both equality and inequality constraints. The analyzed problem of finger force sharing in grasping can be considered an example. Indeed, the normal forces of the fingers cannot be negative or exceed their maximal values, while the magnitudes of the tangential forces cannot exceed the magnitudes of the normal forces multiplied by the friction coefficient. Nevertheless, the developed method can be applied to this problem because the inequality constraints are “passive”, meaning that all experimental points lie inside the set defined by the inequality constraints and not on its boundary. In the context of the inverse optimization problem, passive inequality constraints can be ignored. Conversely, if all the data points lie on the boundary of the constraints, such constraints become “active” and should be treated in the same way as the equality constraints. The case when the inequality constraints are active for some subset of the data and passive for the rest cannot be approached with the proposed method.

In developing our method we assumed that the given experimental data result from the minimization of an additive objective function subject to the known linear constraints. This assumption leaves the question of the existence of solutions of the inverse optimization problem outside the scope of the study. In practical applications the researcher can rarely be sure that the observed experimental data indeed correspond to a solution of an optimization problem. In this case one can assume that they do and apply the method, keeping in mind that the estimated function must be verified to ensure that it really reaches its minima at the observed data.

We illustrate the applicability of our method by analyzing a “real-life” example of the force-sharing problem in grasping. We demonstrate how the conditions of the Uniqueness theorem can help in planning the experiment. We found that, in order to estimate the objective function governing the distribution of the normal finger forces, it is necessary to vary the external torque applied to the handle and the total grip force of the fingers. We note that in most previous attempts to estimate this objective function the external torque and the load force were varied instead.

We use our method to estimate the objective function from the experimental data. The resulting objective function is quadratic with nonzero linear terms. Polynomial and, in particular, quadratic objective functions have been proposed for the force-sharing problem before; however, they did not include linear terms. It must be emphasized that the quadratic form of the estimated objective function follows from the planarity of the surface of experimental data. Moreover, the presence of the linear terms is a consequence of the fact that the experimental plane does not contain the origin of the reference frame. Since the experimental data were limited and noisy, the real objective function may differ from the estimated one. Thus, the result should be treated as a quadratic approximation of the real objective function. In order to illustrate the precision of this approximation we solve the direct optimization problem and compare the solutions with the experimental data. The ability of the objective function to explain the experimental data is illustrated by Fig. 3. In addition, we estimated the objective function on a randomly selected subset of the data and then validated the estimated function on the remaining data. The average performance on the new data was comparable to that obtained when the same data set was used both for estimation and validation. This confirms that, at least in the example considered, the method performs rather robustly and the estimated objective function can be used, with limitations, to predict new experimental data.

We believe that the method we propose can be helpful in the analysis of the principles underlying the control of human movements. It can be applied to a vast range of problems, especially to various forms of the force-sharing problem. The method provides a mathematical tool for an almost unambiguous identification of the objective function from experimental data; however, the question of its interpretation remains open.

The study was supported in part by NIH grants AR-048563, AG-018751, NS-35032 and NSF grant 0754911. The authors would like to thank James Metzler for the technical support and the reviewers for valuable comments that helped us substantially improve the quality of the manuscript. Our special thanks go to an anonymous journal reviewer who provided Example 4.

*x* - An independent variable
*J* - An objective function of an optimization problem
- Constraints for an optimization problem
- An optimization problem with the objective function *J* and the constraints
*f*_{i}(·), *g*_{i}(·) - Scalar functions
*C*, *b* - A matrix and a vector of the linear constraints *Cx* = *b*
*a*_{i} - A scalar value
- A set of indexes

We present here the proofs of the theorems and lemmas formulated in the text.

**Lemma 2** *Consider the following optimization problem*:

$$J(x)={J}^{1}({x}^{1})+{J}^{2}({x}^{2})\to \text{min}$$

*such that*

$$\mathit{\text{Cx}}=b,\text{}x\phantom{\rule{thinmathspace}{0ex}}\in \phantom{\rule{thinmathspace}{0ex}}X,$$

(50)

*where the groups of variables x*^{1} *and x*^{2} *are composed of the components of x with indexes from two complementary index sets, C is a k* × *n-matrix* (*k* < *n*), *rank C* = *k, and b is a k-dimensional vector*.

*Then the groups of variables x*^{1} *and x*^{2} *are independent for the corresponding optimization problem if and only if there is a matrix D*, det *D* ≠ 0, *such that in every row of the matrix DC all elements with indexes from either the first or the second index set are equal to zero*.

*Proof* For simplicity we assume that *x*^{1} corresponds to the first *m* components of *x* and *x*^{2} to the remaining *n* − *m* components.

Suppose there is a matrix *D* such that:

$$\mathit{\text{DC}}=\left(\begin{array}{cc}\hfill {A}^{1}\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill {A}^{2}\hfill \end{array}\right)\phantom{\rule{thinmathspace}{0ex}},\mathit{\text{\hspace{1em}\hspace{1em}Db}}=\left(\begin{array}{c}\hfill {a}^{1}\hfill \\ \hfill {a}^{2}\hfill \end{array}\right)\phantom{\rule{thinmathspace}{0ex}}.$$

Then the constraint (50) splits into two:

$${A}^{1}{x}^{1}={a}^{1},\text{}{A}^{2}{x}^{2}={a}^{2}.$$

Thus, the set of all *x*^{1} satisfying (50) does not depend on *x*^{2} and vice versa.

Consider a set of objective functions ${J}_{{\widehat{x}}^{2}}^{1}={J}^{1}({x}^{1})+{J}^{2}({\widehat{x}}^{2})$ parametrized by *$\widehat{x}$*^{2}. All these objective functions are essentially similar and thus are minimized by the same value *x*^{1}* under all possible constraints. The same holds for ${J}_{{\widehat{x}}^{1}}^{2}$. Since, in addition, the constraints for *x*^{1} do not depend on *$\widehat{x}$*^{2} and vice versa, the groups of variables *x*^{1} and *x*^{2} are independent.

Now, assume that *x*^{1} and *x*^{2} are independent. The objective functions ${J}_{{\widehat{x}}^{2}}^{1}$ are essentially similar for all *$\widehat{x}$*^{2}. The same is true for ${J}_{{\widehat{x}}^{1}}^{2}$. Thus, the groups of variables *x*^{1} and *x*^{2} can be independent only if the constraints on *x*^{1} and *x*^{2} are independent, i.e. the set of all *x*^{1} satisfying (50) does not depend on *$\widehat{x}$*^{2} and the same for *x*^{2}.

Let *C*^{1} be the matrix comprised of the first *m* columns of *C* and *C*^{2} be the matrix comprised of the remaining *n* − *m* columns. The Eq. (50) can be rewritten as

$${C}^{1}{x}^{1}+{C}^{2}{x}^{2}=b.$$

(51)

Equation (51) holds for every *x*^{1} ∈ *S*^{1} ∩ *X* and *x*^{2} ∈ *S*^{2} ∩ *X*, where *S*^{1} and *S*^{2} are affine subspaces of dimensions *k*^{1} and *k*^{2} respectively. Since *X* is an open domain in ℝ^{n} and the matrix *C* has rank *k*, we have *k*^{1} + *k*^{2} = *k*.

Consider the linear space of all rows $d^2$ such that $d^2 C^1 = 0$. Since the matrix $C^1$ has rank $k^1$, the dimension of this space is $k^2$. Hence there exists a $k^2 \times k$-matrix $D^2$, rank $D^2 = k^2$, such that $D^2 C^1 = 0$. Similarly, there exists a $k^1 \times k$-matrix $D^1$, rank $D^1 = k^1$, such that $D^1 C^2 = 0$. The matrix

$$D = \begin{pmatrix} D^1 \\ D^2 \end{pmatrix}$$

has nonzero determinant since the matrix *C* has full rank. One can see that the matrix *DC* is block diagonal.
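The construction of *D* can be illustrated numerically. The sketch below uses a hypothetical 2 × 4 constraint matrix (not taken from the paper); the rows of *D*^{1} and *D*^{2} are computed as bases of the left null spaces of *C*^{2} and *C*^{1} respectively, and *DC* comes out block diagonal:

```python
import numpy as np

def left_null_space(A, tol=1e-10):
    """Rows d with d @ A = 0, computed from the SVD of A.T."""
    _, s, vt = np.linalg.svd(A.T)
    rank = int(np.sum(s > tol))
    return vt[rank:]

# Hypothetical splittable constraint matrix: rank C = 2,
# rank C^1 = rank C^2 = 1, so k^1 = k^2 = 1.
C = np.array([[1.0, 1.0, 1.0, 2.0],
              [2.0, 2.0, -1.0, -2.0]])
C1, C2 = C[:, :2], C[:, 2:]

D1 = left_null_space(C2)   # D^1 C^2 = 0
D2 = left_null_space(C1)   # D^2 C^1 = 0
D = np.vstack([D1, D2])    # invertible because C has full rank

DC = D @ C
# The off-diagonal blocks vanish, so the constraint Cx = b splits into
# independent constraints on x^1 = (x_1, x_2) and x^2 = (x_3, x_4).
assert abs(np.linalg.det(D)) > 1e-10
assert np.allclose(DC[0, 2:], 0) and np.allclose(DC[1, :2], 0)
```

The same recipe fails on a non-splittable matrix: one of the left null spaces is then too small to assemble an invertible *D*.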

*Proof of Lemma 1* Fix *b* in (11) and consider the corresponding direct optimization problem. Since the objective function *J* is differentiable, the Lagrange minimum principle applies. The Lagrange function is

$$\mathcal{L} = \lambda_0 J(x) + (Cx - b, \lambda),$$

(52)

where (·, ·) denotes the scalar product, λ = (λ_{1}, …, λ_{k})^{T} is a *k*-dimensional vector, and λ_{0} ≥ 0 is a number. It can easily be shown that λ_{0} is strictly positive and, thus, we can assume it to be equal to one.

According to the Lagrange principle, if *x* is a solution of the problem (10), (11) then there is a non-zero vector λ such that *x* minimizes $\mathcal{L}$. Since $\mathcal{L}$ is smooth, it means that

$$\frac{\partial \mathcal{L}}{\partial x_i}(x) = g_i'(x_i) + C_i^T \lambda = 0, \qquad \text{for } i = 1, \dots, n,$$

where *C _{i}* is the *i*-th column of the matrix *C*.

In the vector form these equations can be written as

$$g'(x) + C^T \lambda = 0.$$

(53)

Eliminating λ from (53) leads to (12), (13). Since this reasoning holds for all *b* ∈ *B*, it also holds for all *x* ∈ *X**.
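For a quadratic additive objective, the stationarity condition (53) together with *Cx* = *b* forms a linear (KKT) system, which makes the conclusion of Lemma 1 easy to check numerically. The instance below is hypothetical; the role of *Č* is played by any matrix whose rows span null(*C*), so that it annihilates the row space of *C*:

```python
import numpy as np

# Hypothetical additive quadratic objective
# J(x) = sum_i 0.5*w_i*x_i^2 + c_i*x_i, minimized subject to Cx = b.
w = np.array([1.0, 2.0, 4.0, 3.0])
c = np.array([0.5, -1.0, 0.0, 2.0])
C = np.array([[1.0, 1.0, 1.0, 1.0],
              [1.0, -1.0, 2.0, 0.0]])
b = np.array([4.0, 1.0])
n, k = 4, 2

# Solve g'(x) + C^T lambda = 0 together with Cx = b as one linear system.
KKT = np.block([[np.diag(w), C.T],
                [C, np.zeros((k, k))]])
sol = np.linalg.solve(KKT, np.concatenate([-c, b]))
x = sol[:n]

# Eliminating lambda: g'(x) = -C^T lambda lies in the row space of C,
# so a matrix C_check whose rows span null(C) annihilates it.
_, _, vt = np.linalg.svd(C)
C_check = vt[k:]              # (n - k) x n basis of null(C)
g_prime = w * x + c
assert np.allclose(C @ x, b)
assert np.allclose(C_check @ g_prime, 0)
```

The last assertion is exactly the relation Č *g′*(*x*) = 0 of the lemma, verified at the computed minimizer.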

*Proof of Theorem 1* I. The case of an elementary optimization problem, *k* = *n* − 1.

In this case the matrix *Č* has rank one and therefore the equation *Č g′*(*x*) = 0 is equivalent to the following scalar equation

$$a_1 g_1'(x_1) + \cdots + a_n g_n'(x_n) = 0,$$

(54)

where *a* = (*a*_{1}, …, *a _{n}*) is any row of the matrix *Č*.

The coefficients *a _{i}* are non-zero. Indeed, if any of them were zero, the matrix *C* would be splittable (the corresponding variable would separate from the others), contradicting the assumption that the problem is non-splittable.

Let us now prove that if the functions ${f}_{i}^{\prime}$ satisfy (54) on *X** then they coincide with ${g}_{i}^{\prime}$ up to a constant.

Using the Taylor expansion in a vicinity of a point *x* ∈ *X** we obtain that

$$\begin{array}{c} a_1 g_1''(x_1)\,dx_1 + \cdots + a_n g_n''(x_n)\,dx_n = 0, \\ a_1 f_1''(x_1)\,dx_1 + \cdots + a_n f_n''(x_n)\,dx_n = 0, \end{array}$$

which holds for every vector *dx* = (*dx*_{1}, …, *dx _{n}*) tangent to the hypersurface *X** at the point *x*. Since the tangent space has dimension *n* − 1, the two linear forms above must be proportional, i.e. there exists a scalar *r*(*x*) such that

$$a_i f_i''(x_i) = r(x)\, a_i g_i''(x_i), \qquad \text{for } i = 1, \dots, n.$$

(55)

We shall show that *r* (*x*) does not depend on *x*. To this end we express the variable *x*_{1} as a function of other variables on the hypersurface *X**:

$$x_1 = h_1(x_2, \dots, x_n)$$

and transform the Eq. (55) into:

$$f_i''(x_i) = \tilde{r}_1(x_2, \dots, x_n)\, g_i''(x_i), \qquad i = 1, \dots, n,$$

(56)

where $\tilde{r}_1(x_2, \dots, x_n) = r(h_1(x_2, \dots, x_n), x_2, \dots, x_n)$.

The Eq. (56) holds for all *x*_{2}, …, *x _{n}*. Since for *i* = 2 its left-hand side depends on *x*_{2} only, the function $\tilde{r}_1$ cannot depend on *x*_{3}, …, *x _{n}*; taking *i* = 3 shows that it cannot depend on *x*_{2} either. Hence $\tilde{r}_1$ equals a constant *r*, and *r*(*x*) = *r* for all *x* ∈ *X**.

Integrating (55) twice leads to

$$f_i(x_i) = r\, g_i(x_i) + q_i x_i + \mathrm{const}_i,$$

(57)

where *q _{i}* must satisfy

$$a_1 q_1 + \cdots + a_n q_n = 0.$$
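The freedom described by (57) can be checked on a toy elementary problem with hypothetical data. The key observation is that a vector *q* orthogonal to *a* lies in the row space of *C*, so the added linear term *q*^{T}*x* is constant on every constraint set {*Cx* = *b*} and cannot move the minimizer:

```python
import numpy as np

def argmin_quadratic(w, c, C, b):
    """Minimize sum_i 0.5*w_i*x_i^2 + c_i*x_i subject to Cx = b (KKT solve)."""
    n, k = C.shape[1], C.shape[0]
    KKT = np.block([[np.diag(w), C.T], [C, np.zeros((k, k))]])
    return np.linalg.solve(KKT, np.concatenate([-c, b]))[:n]

# Hypothetical elementary problem: n = 3, k = n - 1 = 2.
C = np.array([[1.0, 1.0, 1.0],
              [1.0, 2.0, 0.0]])
a = np.array([2.0, -1.0, -1.0])   # spans null(C): C @ a = 0
q = np.array([1.0, 1.0, 1.0])     # satisfies a @ q = 0
r = 3.0

# g_i(x) = 0.5*x^2 versus f_i(x) = r*g_i(x) + q_i*x: identical minimizers
# for every right-hand side b, in accordance with (57).
for b in [np.array([1.0, 0.0]), np.array([-2.0, 5.0]), np.array([0.3, 0.7])]:
    xg = argmin_quadratic(np.ones(3), np.zeros(3), C, b)
    xf = argmin_quadratic(r * np.ones(3), q, C, b)
    assert np.allclose(xg, xf)
```

Changing *q* to a vector with *a* · *q* ≠ 0 breaks the equality, which is why the data determine the objective only up to the family (57).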

II. The general case, 2 ≤ *k* < *n* − 1.

As was noted above, for arbitrary *n* solving the inverse optimization problem can be reduced to solving a number of elementary subproblems. Thus, to prove the theorem in the general case it suffices to show that for every *i* there exists a non-splittable elementary subproblem containing *g _{i}*, and that the coefficient *r* in (57) is the same for all such subproblems.

First we show that if *C* is a non-splittable *k* × *n*-matrix then for every column *c* of the matrix *C* there exists a non-splittable *s* × (*s* + 1)-minor $\tilde{C}$, rank $\tilde{C}$ = *s*, *s* ≥ 2, resting on *c*.

Assume by contradiction that the statement does not hold for some column *c* of *C*, that is every *s* × (*s* + 1)-minor, *s* ≥ 2, of *C* resting on *c* is splittable. We prove that in this case every *s* × *m*-minor, *m* ≥ *s* + 1, *s* ≥ 2, of *C* resting on *c* is splittable. We shall use induction over *s* and *m*.

The proof is evident for *s* = 2 and arbitrary *m*. We prove that if this holds for all *s′* × *m′*-minors, where *s′* ≤ *s* and *m′* ≥ *s′* + 1, and it holds for (*s* + 1) × *m*-minors then it also holds for (*s* + 1) × (*m* + 1)-minors. Since the property holds for every *s* × (*s* + 1)-minor the latter would prove it for all *s* × *m*-minors with *m* ≥ *s* + 1.

Let *B* be an arbitrary *s* × (*m*+1)-minor of *C* resting on the column *c*. It is splittable according to the induction assumption and thus there exists a matrix *D* such that:

$$DB = \begin{pmatrix} D^{11} & D^{12} \\ D^{21} & D^{22} \end{pmatrix} B = \begin{pmatrix} B^{11} & 0 & 0 \\ 0 & B^{22} & b^{23} \end{pmatrix}.$$

Here *D*^{11} is an *s*^{1} × *s*^{1}-matrix and, respectively, *D*^{22} is *s*^{2} × *s*^{2}, *B*^{11} is *s*^{1} × *m*^{1}, *B*^{22} is *s*^{2} × *m*^{2}, and *b*^{23} is *s*^{2} × 1.

Let *B′* be an (*s* + 1) × (*m* + 1)-minor such that its first *s* rows coincide with those of the minor *B*. Then

$$D'B' = \begin{pmatrix} D^{11} & D^{12} & 0 \\ D^{21} & D^{22} & 0 \\ 0 & 0 & 1 \end{pmatrix} B' = \begin{pmatrix} B^{11} & 0 & 0 \\ 0 & B^{22} & b^{23} \\ b^{31} & b^{32} & b^{33} \end{pmatrix}.$$

Here *b*^{31} is 1 × *m*^{1}, *b*^{32} is 1 × *m*^{2}, and *b*^{33} is a scalar.

Consider the (*s*+1) × *m*-minor *B″* composed of all but the last column of *B′*. The matrix *D′ B″* is splittable and hence either there exists a row *d*^{31} such that *d*^{31}*B*^{11} + *b*^{31} = 0, or there exists a row *d*^{32} such that *d*^{32} *B*^{22} + *b*^{32} = 0. If the former is true, then *B′* is obviously splittable; in particular, this is the case when *B*^{11} is a single column. Assume the latter is true and *B*^{11} consists of at least two columns. Then

$$D''B' = \begin{pmatrix} D^{11} & D^{12} & 0 \\ D^{21} & D^{22} & 0 \\ d^{32}D^{21} & d^{32}D^{22} & 1 \end{pmatrix} B' = \begin{pmatrix} B^{11} & 0 & 0 \\ 0 & B^{22} & b^{23} \\ b^{31} & 0 & d^{32}b^{23} + b^{33} \end{pmatrix}.$$

Consider the (*s* + 1) × *m*-minor *B̄* composed of all but the first column of *B′*. Since *B̄* is splittable by the induction assumption, there exists a row *d*^{33} such that *d*^{33} *B*^{22} = 0 and *d*^{33} *b*^{23} + *d*^{32} *b*^{23} + *b*^{33} = 0. Thus,

$$D'''B' = \begin{pmatrix} D^{11} & D^{12} & 0 \\ D^{21} & D^{22} & 0 \\ (d^{32}+d^{33})D^{21} & (d^{32}+d^{33})D^{22} & 1 \end{pmatrix} B' = \begin{pmatrix} B^{11} & 0 & 0 \\ 0 & B^{22} & b^{23} \\ b^{31} & 0 & 0 \end{pmatrix},$$

which proves that *B′* is splittable.

We have proved that every *s* × *m*-minor of *C* resting on *c* is splittable. In particular, it means that the matrix *C* itself is splittable, contradicting the assumption of the theorem. Hence for every column *c* of the matrix *C* there exists a non-splittable *s* × (*s* + 1)-minor $\tilde{C}$, *s* ≥ 2, resting on *c*.

It can be proved that for every two columns *c*^{1} and *c*^{2} of the matrix *C* there exists a non-splittable *s* × (*s* + 1)-minor, *s* ≥ 2, of the matrix *C* resting on both of them. The proof of this fact is similar to the previous one and is omitted.

Now we prove the theorem for the general case. Assume there are functions *f _{i}* (*x _{i}*) such that *Č f′*(*x*) = 0 for all *x* ∈ *X**.

Consider any function *g _{i}*. There exists a non-splittable *s* × (*s* + 1)-minor $\tilde{C}$ of the matrix *C* resting on the *i*-th column of *C*; it defines a non-splittable elementary subproblem containing *g _{i}*.

It can be shown that the functions *f _{i}* entering this subproblem satisfy the equation $\check{\tilde{C}}\tilde{f}' = 0$, where $\check{\tilde{C}}$ is built for the minor $\tilde{C}$ in the same way as *Č* is built for *C*. Applying part I of the proof to this elementary subproblem yields the representation (57) for these functions.

The same procedure can be performed for all sets of indices. The scalar *r* in (57) is the same for all *i* = 1, …, *n*, since for every *i*_{1} and *i*_{2} there exists an elementary subproblem which contains both *g*_{i1} and *g*_{i2}. Obviously, the constants *q _{i}* must satisfy *Č q* = 0, where *q* = (*q*_{1}, …, *q _{n}*)^{T}.
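Summarizing parts I and II, the conclusion of the theorem can be restated compactly in the notation of the proof (this is a recap, not a new claim): any functions recovered from $\check{C} f'(x) = 0$ on $X^*$ have the form

```latex
f_i(x_i) = r\, g_i(x_i) + q_i x_i + \mathrm{const}_i, \qquad i = 1, \dots, n,
```

with a single scalar $r$ common to all $i$ and constants $q_i$ satisfying $\check{C} q = 0$, where $q = (q_1, \dots, q_n)^T$.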

Alexander V. Terekhov, Department of Kinesiology, The Pennsylvania State University, 039 Recreation Building, University Park, PA 16802, USA. Institut des Systèmes Intelligents et de Robotique, CNRS-UPMC, Pyramide ISIR, 4 Place Jussieu, 75005 Paris, France, Email: avterekhov@gmail.com.

Yakov B. Pesin, Department of Mathematics, The Pennsylvania State University, University Park, PA 16802, USA.

Xun Niu, Department of Kinesiology, The Pennsylvania State University, 039 Recreation Building, University Park, PA 16802, USA.

Mark L. Latash, Department of Kinesiology, The Pennsylvania State University, 039 Recreation Building, University Park, PA 16802, USA.

Vladimir M. Zatsiorsky, Department of Kinesiology, The Pennsylvania State University, 039 Recreation Building, University Park, PA 16802, USA, Email: vxz1@psu.edu.

