# Introduction

In the nautical practice of Vietnam, ships encounter special situations such as a close approach of ship to ship, of ship to a floating object, or of ship to a moving object. In order to control the ship safely in these cases, this research develops an algorithm of proximity control for linear systems, which will be applied to ship control in close approach. For this purpose, the results obtained for the general problem are applied to the case of linear systems with constant parameters. Throughout this paper, the target area S is taken to be the origin of the phase space. This task will be called the problem of the time-optimal regulator.

# II. The Problem of the Time-Optimal Controller for a Linear System with Constant Parameters

A set of control models for ships in close approach has been developed. Consider a linear time-invariant dynamic system [1,4,11,12]:

$$\dot{x}(t) = A x(t) + B u(t) \qquad (2.1)$$

where:

- the state of the system $x(t)$ is an $n$-dimensional vector;
- the system matrix $A$ is a constant matrix of size $n \times n$;
- the matrix of control-function coefficients ("gain") $B$ is a constant matrix of size $n \times r$.

We assume that the system is completely controllable and that the components $u_1(t), u_2(t), \dots, u_r(t)$ are limited in magnitude:

$$|u_j(t)| \le 1, \quad j = 1, 2, \dots, r \qquad (2.2)$$

At the given initial time $t_0 = 0$ the initial state of the system is

$$x(0) = \xi \qquad (2.3)$$

Find the control $u^*(t)$ transferring the system from $\xi$ to $0$ in minimum time. We denote by $\lambda_1, \lambda_2, \dots, \lambda_n$ the eigenvalues of the system matrix $A$, and by $b_1, b_2, \dots, b_r$ the column vectors of the matrix $B$:

$$B = [\, b_1 \;\; b_2 \;\; \cdots \;\; b_r \,] \qquad (2.4)$$

The system is fully controllable. This means that a control transferring the system (2.1) from any initial state $\xi$ to the origin $0$ exists. This holds if the matrix of size $n \times (rn)$

$$G = [\, B \;\; AB \;\; A^2B \;\; \cdots \;\; A^{n-1}B \,] \qquad (2.5)$$

contains $n$ linearly independent column vectors. The output $y(t)$ of the system (2.1) is related to its state $x(t)$ and the control $u(t)$ by the output equation. A block diagram of an optimal feedback system is shown in Fig. 3.2. The functions $x_1(t), x_2(t), \dots, x_n(t)$ are measured at each instant and fed into a subsystem designated C ("computer"). Its outputs are the switching functions $h_1[x(t)], h_2[x(t)], \dots, h_r[x(t)]$, which are then fed to the ideal relays $R_1, R_2, \dots, R_r$ producing the time-optimal control variables. Obtaining and implementing the functions $h_1[x(t)], \dots, h_r[x(t)]$ is the core of the optimal control problem.

# III. The Geometric Properties of the Time-Optimal Control

a) The Surface of the Minimum Time

Previously, the author discussed the geometric nature of the time-optimal problem on the basis of the sets of reachable states [2,12]. We then passed from geometric considerations to the analytical results obtained from the necessary conditions given by the minimum principle. In this section we give a geometric interpretation of those necessary conditions. We assume that the problem is normal [12]. Note also that the material in this section is a specification of the above-mentioned remarks. We consider the surface of the minimum time and treat the optimal control as the control that causes the system to move along the surface of the minimum time in the direction of fastest decrease. After that we will be able to establish a correspondence between the adjoint variable and the gradient of the minimum-time surface.
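The rank test on the matrix G in (2.5) is straightforward to carry out numerically. The following sketch (Python with NumPy; the double-integrator matrices are an illustrative choice of ours, not taken from the paper) builds G and checks the Kalman rank condition:

```python
import numpy as np

def controllability_matrix(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Build G = [B  AB  A^2 B ... A^(n-1) B] from (2.5)."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

def is_completely_controllable(A: np.ndarray, B: np.ndarray) -> bool:
    """(A, B) is completely controllable iff G contains n linearly
    independent columns, i.e. rank(G) = n."""
    return np.linalg.matrix_rank(controllability_matrix(A, B)) == A.shape[0]

# Example: one translational axis of a ship modelled as a double integrator
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
print(is_completely_controllable(A, B))   # True
```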
Our arguments are inherently heuristic, since we are primarily interested in giving a geometric interpretation of the necessary conditions. Let $x$ be a state in the space of phase coordinates. Suppose that there is a (unique) optimal control that sends $x$ to $0$. We denote the minimum time required for the transfer of $x$ to $0$ by

$$T^*(x) \qquad (3.1)$$

We show that the minimum time $T^*(x)$ depends on the state $x$ and does not depend explicitly on time, that is,

$$\frac{\partial T^*(x)}{\partial t} = 0 \qquad (3.2)$$

This is true because the system $\dot{x}(t) = Ax(t) + Bu(t)$ is time-invariant, which implies that the minimum time can only be a function of the state. In other words, if $x$ is the state of the system at $t = 0$ and the minimum time required for the transfer of $x$ to $0$ is $T^*(x)$, and $x$ is also the state at $t = t_0$, then the optimal control will transfer $x$ to $0$ at the time $t_0 + T^*(x)$. Since the time required to transfer the system from $0$ to $0$ is zero, and we consider only positive transfer times, it is obvious that $T^*(x)$ has the properties

$$T^*(x) = 0 \;\; \text{for } x = 0 \qquad (3.3)$$

$$T^*(x) > 0 \;\; \text{for } x \ne 0 \qquad (3.4)$$

In what follows, for the gradient of the function $T^*(x)$ with respect to $x$ we use the notation [11]

$$\frac{\partial T^*(x)}{\partial x} = \left[\frac{\partial T^*(x)}{\partial x_1}, \; \frac{\partial T^*(x)}{\partial x_2}, \; \dots, \; \frac{\partial T^*(x)}{\partial x_n}\right]' \qquad (3.5)$$

We next consider some properties of the function $T^*(x)$. It is useful to regard $T^*(x)$ as the minimum-time surface and to present it graphically, as shown in Fig. 3.3.

b) The Minimum Isochrones

Let $S(\tau)$ be the set of states from which one can go to $0$ in the same minimum time $\tau$, $\tau \ge 0$. We call $S(\tau)$ the minimal isochrone $\tau$. It is defined by

$$S(\tau) = \{x : T^*(x) = \tau\}, \quad \tau \ge 0 \qquad (3.6)$$

Let $\bar{S}(\tau)$ be the set of states from which one can go to the origin, using the optimal control, in a time less than or equal to $\tau$ [3,7]:

$$\bar{S}(\tau) = \{x : T^*(x) \le \tau\}, \quad \tau \ge 0 \qquad (3.7)$$

From equations (3.6) and (3.7) we conclude that $S(\tau)$ is a subset of $\bar{S}(\tau)$; indeed, one can verify that $S(\tau)$ is the boundary of the closed set $\bar{S}(\tau)$ [4,13]. We now prove that the set $\bar{S}(\tau)$ is strictly convex. Let $x^1$ and $x^2$ be two distinct states on the $\tau$-minimal isochrone:

$$x^1 \in S(\tau), \qquad x^2 \in S(\tau) \qquad (3.8)$$

In view of normality, we know that there are unique optimal controls $u_1^*(t) = -SIGN\{B'p_1^*(t)\}$ transferring $x^1$ to $0$ and $u_2^*(t) = -SIGN\{B'p_2^*(t)\}$ transferring $x^2$ to $0$. Thus the following equations must be valid:

$$x^1 = -\int_0^{\tau} e^{At} B\, SIGN\{B'p_1^*(t)\}\, dt \qquad (3.9)$$

$$x^2 = -\int_0^{\tau} e^{At} B\, SIGN\{B'p_2^*(t)\}\, dt \qquad (3.10)$$

Choose

$$0 < \alpha < 1 \qquad (3.11)$$

and consider the state $x = \alpha x^1 + (1 - \alpha)x^2$ on the (open) segment joining $x^1$ and $x^2$, as shown in Fig. 3.4, and let $\tau' = T^*(x)$. We show that

$$\tau' < \tau \qquad (3.14)$$

To prove this, we note that

$$x = -\int_0^{\tau'} e^{At} B\, SIGN\{B'p^*(t)\}\, dt \qquad (3.15)$$

From (3.9) and (3.10) it follows that the control

$$u(t) = \alpha u_1^*(t) + (1 - \alpha)u_2^*(t) \qquad (3.16)$$

transfers $x$ to $0$ in the time $\tau$. However, this control is not time-optimal, since it is not a vector whose components are functions of the sign type. To see this, suppose that at some time $\tilde{t}$ the first components of the two controls differ, for instance

$$u_{1,1}^*(\tilde{t}) = +1, \qquad u_{2,1}^*(\tilde{t}) = -1 \qquad (3.17)$$

From here one obtains for the first component of (3.16)

$$\alpha u_{1,1}^*(\tilde{t}) + (1 - \alpha)u_{2,1}^*(\tilde{t}) = 2\alpha - 1 \qquad (3.18)$$

But since $0 < \alpha < 1$, we have the inequality

$$-1 < 2\alpha - 1 < +1 \qquad (3.19)$$

and therefore the control (3.16) cannot be the time-optimal one. If this control is not optimal and yet transfers $x$ to $0$ during the time $\tau$, then the optimal control requires $\tau' < \tau$. Thus the statement (3.14) is proved. We have seen that $S(\tau)$ is the boundary of $\bar{S}(\tau)$. Consequently, the state $x = \alpha x^1 + (1-\alpha)x^2$, $\alpha \in (0,1)$, is an element of the interior of $\bar{S}(\tau)$, and therefore the set $\bar{S}(\tau)$ is strictly convex.

c) The Heuristic Geometric Proof

Note that the minimal isochrones "grow" as $\tau$ increases [2,12,18]. Suppose that $\tau_1$ and $\tau_2$ are two arbitrary times with

$$0 < \tau_1 < \tau_2 \qquad (3.20)$$

Then we can show that

$$\bar{S}(\tau_1) \subset \bar{S}(\tau_2) \qquad (3.21)$$

The inclusion (3.21) means that the minimal isochrones increase their "distance" from the origin with increasing time, and that this increase is "smooth". To clarify this statement, we give a heuristic geometric proof. Suppose that $\xi$ is the state at $t = 0$, and assume that the transfer of $\xi$ to $0$ by means of the optimal control $u^*(t)$ takes the time $0 \le t \le \tau$; thus $\xi \in S(\tau)$. Let $x^*(t)$ denote the corresponding optimal trajectory. According to the principle of optimality, the control $u^*(t)$ on $\Delta \le t \le \tau$ is the optimal control transferring the state $x^*(\Delta)$ to $0$, so that $x^*(\Delta) \in S(\tau - \Delta)$ for $\Delta > 0$; the optimal trajectory crosses the minimal isochrones one after another (Fig. 3.5). Assume now that $T^*(x)$ is differentiable everywhere except at the origin; in other words, the components of the gradient vector

$$\frac{\partial T^*(x)}{\partial x} = \left[\frac{\partial T^*(x)}{\partial x_1}, \; \dots, \; \frac{\partial T^*(x)}{\partial x_n}\right]' \qquad (3.22)$$

are well-defined functions for all $x \in X$, where $X$ denotes the set on which $T^*(x)$ is differentiable.
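As a concrete illustration of $T^*(x)$ and the minimal isochrones, the double integrator $\dot{x}_1 = x_2$, $\dot{x}_2 = u$, $|u| \le 1$ (used again in Section V) admits a classical closed-form minimum time. The sketch below (Python with NumPy; the grid and the function name are our choices) samples $T^*(x)$; its level sets $\{T^* = \tau\}$ are the isochrones $S(\tau)$ of (3.6), and its sublevel sets are the strictly convex sets $\bar{S}(\tau)$ of (3.7):

```python
import numpy as np

def min_time_double_integrator(x1: float, x2: float) -> float:
    """Classical closed-form T*(x) for x1' = x2, x2' = u, |u| <= 1.
    The switching curve is x1 = -x2*|x2|/2; one control bang is used on
    the curve itself, two bangs everywhere else."""
    s = x1 + 0.5 * x2 * abs(x2)
    if s > 0:                                  # u = -1 first, then u = +1
        return x2 + 2.0 * np.sqrt(x1 + 0.5 * x2 * x2)
    elif s < 0:                                # u = +1 first, then u = -1
        return -x2 + 2.0 * np.sqrt(0.5 * x2 * x2 - x1)
    else:                                      # on the switching curve
        return abs(x2)

# Sample T*(x) on a grid; a contour {T = tau} of this array is S(tau),
# and the region {T <= tau} is S_bar(tau).
xs = np.linspace(-3.0, 3.0, 201)
ys = np.linspace(-3.0, 3.0, 201)
T = np.array([[min_time_double_integrator(a, b) for a in xs] for b in ys])
print(T.min(), T.max())   # contour plotting (e.g. matplotlib) renders S(tau)
```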
The gradient (3.22) evaluated at $x = \xi$ is normal to the minimal isochrone $S(\tau)$ at the point $\xi$ and is directed away from the origin; the vector $-\,\partial T^*(x)/\partial x\,|_{x=\xi}$ determines the direction of "fastest decrease" on the surface $T^*(x)$ at that point. (If we construct the surface $T^*(x)$ and place a ball on it at the point $\xi$, it begins to roll down the surface $T^*(x)$ in this direction.) At $t = 0$ we have

$$\dot{x}(0) = A\xi + Bu(0), \qquad u(0) \in \Omega \qquad (3.23)$$

The direction and magnitude of the vector $\dot{x}(0)$ obviously depend on the vector $A\xi$, which is fixed by the state $\xi$, and on the vector $Bu(0)$, whose magnitude and direction can be selected within the constraint $u(0) \in \Omega$. If we "try" all admissible controls $u(0) \in \Omega$, we obtain the set of vectors $\{\dot{x}(0)\}$, which forms a cone $K$; we assume that this cone is as shown in Fig. 3.6. Thus the restriction $u(0) \in \Omega$ defines the admissible directions in Fig. 3.6, and in general we cannot make $\dot{x}(0)$ point exactly along $-\,\partial T^*(x)/\partial x\,|_{x=\xi}$. However, there is a vector $\dot{x}^*(0)$ pointing in the direction of fastest decrease compatible with the restrictions imposed. We denote by $u^*(0)$ the control vector such that

$$\dot{x}^*(0) = A\xi + Bu^*(0) \qquad (3.24)$$

What distinguishes the vector $\dot{x}^*(0)$ from all the other possible vectors $\dot{x}(0)$? It is easy to see that $\dot{x}^*(0)$ satisfies (see Fig. 3.6)

$$\left\langle \frac{\partial T^*(x)}{\partial x}\bigg|_{x=\xi},\; \dot{x}^*(0) \right\rangle \le \left\langle \frac{\partial T^*(x)}{\partial x}\bigg|_{x=\xi},\; \dot{x}(0) \right\rangle \qquad (3.25)$$

for all $\dot{x}(0) \in K$. Similarly, from (3.23) and (3.24) we find that

$$\left\langle \frac{\partial T^*(x)}{\partial x}\bigg|_{x=\xi},\; Bu^*(0) \right\rangle \le \left\langle \frac{\partial T^*(x)}{\partial x}\bigg|_{x=\xi},\; Bu(0) \right\rangle \qquad (3.27)$$

for all $u(0) \in \Omega$. Physically, $u^*(0)$ must be the optimal control at the point $x = \xi$, because it makes the state of the system, or the representative point in the phase space, move so as to maximize the rate of decrease of the minimum time. The control $u^*(0)$ satisfying (3.27) sets the direction of fastest decrease (compatible with the restrictions) along the surface of the minimum time $T^*(x)$ at the point $x = \xi$. If $u^*(0)$ is the optimal control, then by the known necessary condition there exists an adjoint variable $p^*(0)$ for which the relation

$$\langle p^*(0),\; A\xi + Bu^*(0) \rangle \le \langle p^*(0),\; A\xi + Bu(0) \rangle \qquad (3.31)$$

holds for all $u(0) \in \Omega$, with $p^*(0)$ proportional to the outward normal to the minimal isochrone $S(\tau)$ at the point $\xi$.

Suppose now that $\xi$ lies at a corner of the isochrone $S(\tau)$, and let $\xi^1$ and $\xi^2$ be states near $\xi$ on the isochrone $S(\tau)$, $\xi^1$ to the "right" of $\xi$ and $\xi^2$ to the "left". The statement "$\xi$ is at a corner of the isochrone $S(\tau)$" means that the one-sided limits of the gradient differ:

$$\lim_{\xi^1 \to \xi} \frac{\partial T^*(x)}{\partial x}\bigg|_{x=\xi^1} \ne \lim_{\xi^2 \to \xi} \frac{\partial T^*(x)}{\partial x}\bigg|_{x=\xi^2} \qquad (3.32)$$

Thus, if $x = \xi$ is a corner point, we cannot find the direction of steepest decrease at $x = \xi$, since the gradient is not defined there. This means that the optimal control at this point cannot be determined by the geometric reasoning given in the preceding discussion. If, however, there still exists a vector $p^*(0)$ corresponding to $\xi$ and $u^*(0)$, then (3.31) remains in force; what is lost is the link between $p^*(0)$ and the normal to the minimal isochrone. The preceding discussion was limited to initial states located on a given minimal isochrone. Note that, by the principle of optimality, the same remarks hold for any state $x^*(t)$ on the optimal trajectory to the origin, with $p^*(t)$ the corresponding adjoint variable.

# IV. Conditions for the Existence of Optimal Control

a) The Particular Problem of Existence of Optimal Control to the Origin from a Heuristic Point of View

In this section we discuss conditions on the control system which guarantee the existence of an optimal control to the origin from any initial state in the phase space [1,15,16]. The question of the existence of an optimal control when moving from an arbitrary initial state to an arbitrary target area $S$ is extremely complex. It is therefore useful to consider the particular problem of the existence of an optimal control to the origin from a heuristic point of view. Suppose that we are given a fully controllable dynamical system with the control limited in magnitude by $u(t) \in \Omega$. Using the assumption of controllability of the system, we can find at least one control that will transfer any initial state $\xi$ to $0$ in finite time, if the constraint on the control is disregarded.
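Relation (3.31) reduces the pointwise choice of the optimal control to a componentwise sign decision: since $\langle p, Bu \rangle$ is linear in $u$, its minimum over the box $|u_j| \le 1$ is attained at $u = -SIGN\{B'p\}$. A minimal sketch of this selection rule (Python with NumPy; the function name, the sample gradient, and the test state are our assumptions):

```python
import numpy as np

def fastest_decrease_control(A, B, xi, grad_T):
    """Choose u in the box |u_j| <= 1 minimizing <grad_T, A*xi + B*u>.
    Only the term <grad_T, B*u> depends on u, and it is linear, so the
    minimum is attained componentwise at u* = -sign(B' grad_T): the
    bang-bang rule of relation (3.31)."""
    s = B.T @ grad_T
    u_star = -np.sign(s)                 # components with s_j == 0 are left at 0
    return u_star, A @ xi + B @ u_star   # (u*(0), xdot*(0)) as in (3.24)

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
xi = np.array([1.0, 1.0])
grad_T = np.array([0.5, 1.2])            # an assumed outward normal at xi
u_star, xdot_star = fastest_decrease_control(A, B, xi, grad_T)
print(u_star, xdot_star)                 # [-1.]  [ 1. -1.]
```

Any outward normal — the gradient of $T^*$ where it exists, or the adjoint $p^*$ at a corner — can be passed in place of `grad_T`; the componentwise sign structure is exactly the bang-bang property.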
It may, however, turn out that the initial state $\xi$ is so far from the origin that it can be transferred to $0$ only by controls that do not satisfy the constraint $u(t) \in \Omega$. In this case there are initial states that cannot be transferred to $0$ by admissible controls. We can make the following observation: for a given controllable dynamical system $\dot{x}(t) = f[x(t), u(t)]$ and constraint set $\Omega$, the $n$-dimensional phase space $R^n$ can be divided into two subsets $\Xi$ and $R^n \setminus \Xi$ with the following properties:

1. If $\xi \in \Xi$, there exists at least one admissible control transferring $\xi$ to $0$ in finite time;
2. If $\xi \in R^n \setminus \Xi$, there is no admissible control taking $\xi$ to any element of $\Xi$ in finite time (and therefore $\xi$ cannot be transferred to $0$ by an admissible control).

In essence, the constrained control cannot provide a sufficient "push" to transfer a state $\xi \in R^n \setminus \Xi$ into $\Xi$, and hence to the origin. From a physical point of view, the control $u(t)$ can add energy to, or take energy away from, the dynamical system. If we regard the state $x = 0$ as a state of zero energy, we can see that a system for which the set $R^n \setminus \Xi$ is not empty is in fact unstable. For this reason a stable, fully controllable dynamic system is characterized by the relation $\Xi = R^n$, while for an unstable, fully controllable system $\Xi \subset R^n$ but $\Xi \ne R^n$.

b) An Existence Theorem

The following theorem confirms this and guarantees the existence of an optimal control to the origin from any initial state. Consider the time-optimal control of the system $\dot{x}(t) = Ax(t) + Bu(t)$ in accordance with the stated objective of the motion [1,12]. If the eigenvalues of $A$ have non-positive (negative or zero) real parts, then the time-optimal control to the origin exists for any initial state in $R^n$. A rigorous proof of this theorem can be found in [17]. We illustrate the essence of the proof for distinct real eigenvalues and a single control variable $u(t)$:

$$\dot{x}_i(t) = \lambda_i x_i(t) + b_i u(t), \quad |u(t)| \le 1, \quad x_i(0) = \xi_i, \quad i = 1, 2, \dots, n \qquad (4.1)$$

The solution of (4.1) for any given control is

$$x_i(t) = e^{\lambda_i t}\left[\xi_i + \int_0^t e^{-\lambda_i \tau} b_i u(\tau)\, d\tau\right] \qquad (4.2)$$

Suppose [1,6,15,16] we have found an admissible control $\tilde{u}(t)$ for which $x_1(\tilde{T}) = x_2(\tilde{T}) = \dots = x_n(\tilde{T}) = 0$. This means that the relation

$$\xi_i = -\int_0^{\tilde{T}} e^{-\lambda_i t} b_i \tilde{u}(t)\, dt \qquad (4.3)$$

is satisfied for all $i = 1, 2, \dots, n$. Since $|\tilde{u}(t)| \le 1$ [1,11,12], it can be concluded that

$$|\xi_i| \le |b_i| \int_0^{\tilde{T}} e^{-\lambda_i t}\, dt \qquad (4.4)$$

Thus, if $\lambda_1 > 0$ and $|\xi_1| > |b_1|/\lambda_1$, it is impossible to find $\tilde{T}$ such that $x_1(\tilde{T}) = 0$, because $\int_0^{\tilde{T}} e^{-\lambda_1 t}\, dt < 1/\lambda_1$ for every finite $\tilde{T}$; therefore no optimal control exists. If all the eigenvalues $\lambda_i$ are non-positive, it is easy to show that relation (4.4) can be satisfied for any $\xi_i$, $i = 1, 2, \dots, n$, by choosing a large enough value $\tilde{T}$. This, in turn, means that the optimal control exists for all initial states of the system.

c) The Optimal Control System

Consider the time-optimal control of the system

$$\dot{x}(t) = a x(t) + u(t), \qquad |u(t)| \le 1, \qquad x(0) = \xi \qquad (4.8)$$

If $a \le 0$, then [1,13] the optimal control to the state $x = 0$ exists for all $\xi$. If $a > 0$, the system is unstable. We find the region of initial conditions for which the optimal control exists. If the optimal control $u^*(t)$ exists and $|u^*(t)| = 1$, then, by the same reasoning as in (4.3) and (4.4),

$$|\xi| \le \int_0^{T} e^{-a t}\, dt = \frac{1}{a}\left(1 - e^{-aT}\right) \qquad (4.9)$$

For the relation (4.9) to be satisfied for some finite positive $T$, one must have $|\xi| < 1/a$. Thus, the region $\Xi$ of initial values $\xi$ for which there exists an optimal control to the origin is determined by the relation

$$\Xi = \left\{\xi : |\xi| < \frac{1}{a}\right\}, \qquad a > 0 \qquad (4.10)$$

If $|\xi| < 1/a$, then an optimal control exists. Thus, the region $\Xi$ is an open set containing the origin [1,6,9,13,16].
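The eigenvalue condition of the theorem and the componentwise bound (4.4) are both easy to check numerically. A hedged sketch (Python with NumPy; the function names and the sample data are ours):

```python
import numpy as np

def time_optimal_exists_everywhere(A: np.ndarray) -> bool:
    """Theorem of Section IV: if all eigenvalues of A have non-positive
    real parts, a time-optimal control to the origin exists from any
    initial state in R^n."""
    return bool(np.all(np.linalg.eigvals(A).real <= 0.0))

def admissible_initial_bounds(lambdas, b):
    """Bound (4.4) for x_i' = lambda_i x_i + b_i u, |u| <= 1: for
    lambda_i > 0 an admissible control to 0 exists only if
    |xi_i| < |b_i| / lambda_i; for lambda_i <= 0 there is no bound."""
    return np.array([abs(bi) / lam if lam > 0 else np.inf
                     for lam, bi in zip(lambdas, b)])

print(time_optimal_exists_everywhere(np.array([[0.0, 1.0], [0.0, 0.0]])))  # True
print(admissible_initial_bounds([0.5, -1.0], [1.0, 1.0]))  # [2. inf]
```

For the scalar system (4.8) with $a = 0.5$ and $b = 1$ the bound is $1/a = 2$, which reproduces the region $\Xi$ of (4.10).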
# V. The Hamilton-Jacobi Equation

a) The State on the Optimal Trajectory and the Value of the Optimal Control

Previously we discussed the change of the minimum time along the optimal path and examined the geometric properties of the time-optimal control problem [2,12]. In this section we tie these concepts together and study the Hamilton-Jacobi equation for the time-optimal problem. The purpose of this section is to show how the general results can be used for the time-optimal problem [1,14,15]. Throughout this section we deal with the time-optimal control of a normal system $\dot{x}(t) = Ax(t) + Bu(t)$ with the target set (the origin) $S = 0$. We use the following notation: for a given state $x$, $T^*(x)$ denotes the minimum time required for the transfer of $x$ to $0$, and $u^*$ denotes the value of the optimal control in the state $x$. The specific objectives of this section are:

1) to show how the Hamilton-Jacobi equation can be used to check whether a function $T(x)$, found in the course of solving the optimal control problem, is equal to $T^*(x)$;
2) to point out the difficulties that arise if the assumed optimal control law is wrong;
3) to note the difficulties associated with determining the optimal control directly from the Hamilton-Jacobi equation.

Let us turn to the use of the Hamilton-Jacobi equation as a necessary condition. From the general theory of the minimum principle the following lemma can be deduced. Let $x^*$ be a state on the optimal trajectory and $u^*$ the value of the optimal control at $x^*$; then

$$1 + \left\langle \frac{\partial T^*(x)}{\partial x}\bigg|_{x=x^*},\; A x^* + B u^* \right\rangle = 0 \qquad (5.1)$$

provided that $\partial T^*(x)/\partial x$ exists at $x = x^*$. This lemma is useful when the problem of optimal control has been solved and we want to find out whether a given function $T(x)$ can be the expression that determines the minimum time as a function of the state. If this function does not satisfy equation (5.1) at even one point, it can be immediately excluded from the number of possible candidates for the minimum time. We show this in the following example.

b) The Minimum Time Function

Suppose that the linear system is described by the equations

$$\dot{x}_1(t) = x_2(t), \qquad \dot{x}_2(t) = u(t), \qquad |u(t)| \le 1 \qquad (5.2)$$

For this system it can be verified that the time-optimal feedback control is the bang-bang law

$$u^*(x) = -\,SIGN\left\{ x_1 + \tfrac{1}{2}\, x_2 |x_2| \right\} \qquad (5.3)$$

Suppose that somehow we have found a relationship

$$T = T(x_1, x_2) \qquad (5.4)$$

which supposedly expresses the minimum time as a function of the state. The conjecture is not unfounded, because $T(x) > 0$ for all $x \ne 0$, $T(0) = 0$, and $T(x) \to \infty$ as $\|x\| \to \infty$. We now show that this assumption is wrong. First of all, we calculate the gradient of $T(x)$; from (5.4) we find

$$\frac{\partial T(x)}{\partial x} = \left[\frac{\partial T(x)}{\partial x_1},\; \frac{\partial T(x)}{\partial x_2}\right]' \qquad (5.6)$$

We evaluate it at two test points, $x = \bar{x}$ and $x = \tilde{x}$. Direct substitution into (5.1) shows that at $x = \bar{x}$ the left-hand side of (5.1) does not vanish (5.9). It can be concluded that $T(x)$ in (5.4) cannot be a formula that expresses the minimum time, because at $x = \bar{x}$ equation (5.1) is not satisfied. Let us see what happens if we test $T(x)$ at $x = \tilde{x}$: there the left-hand side of (5.1) equals $1 + 1 - 2 = 0$.
3 . 1 :") 32![Fig. 3.2 : Structure of the time-optimal control systems with feedback III.](image-3.png "Fig. 3 . 2 :") 33![Fig. 3.3 : The surface of the minimum value (minimum time) T*(x) as a function of x Next, we define the concept of minimum isochrones. b) The Minimum Isochrones Let S(?)-the set of states from which you can go to 0 for the same minimum time ?, ?? 0. We call S(?)minimal isochrone ?. This function S(?) is defined by { } ( ) : ( ) ; 0 S x T x ? ? ? * = = ? (3.6)](image-4.png "Fig. 3 . 3 :") ![that x -on the condition (open) segment joining x 1 and x 2 , as shown in Fig. 3.4.Choose:](image-5.png "") 34![Fig. 3.4 : Illustration convexity From (3.12), (3.10) and (3.9), we obtain:](image-6.png "Fig. 3 . 4 :") 3536![Fig. 3.5 : Minimum isochrones S(?)and S(?-?), ?>0. -Optimal path from x*(?) to 0is a part-optimal path from to 0According to the principle of optimal control u*(t) to t ? ? ? ? have optimal control taking the ( )x ? *](image-7.png "Fig. 3 . 5 :Fig. 3 . 6 :") ![of the most rapid changes in the function T*(x)at the point . As shown in Fig.3.6 gradient is normal to the curve ( ) S ? at the point x ? = and directed from "origin". The direction of the vector ( ) phantom in Fig.3.6) determines the direction of "fastest decrease" on the surface T*(x) at a point. So, if we construct the surface T*(x) and put it in a ballpoint , it begins to roll down the surface T*(x) in the direction of the vector .](image-8.png "") ![control u*(0)satisfying(3.27), sets the direction of fastest decrease(compatible with restrictions)along the surface of the minimum time T*(x)at the point x ? = . This control u*(0)satisfying(3.27) must also satisfy the relation; ? -arbitrary vector directed "outside" and the minimum normal isochrones ( ) S ? at the point ? .](image-9.png "") ( ) ? = S ?{: ( ) ; x T x ? ? * ??} 0(3.7)Equations (3.5) and (3.6), we conclude that there is a subset ( ) S ? ? of ( ) S ? . It can be sure that ( ) S ? is the boundary and closed ( ) S ? ? [4, 13].We prove that the set of ( ) S ? ?is strictly convex.Let x 1 and x 2 -two different states at the ? -minimumisochrones.1 x S ?( ), ?2 x S ?( ) ?(3.8)In view of normality, we know that there are onlyoptimal control1 u t ( ) *= ?{ SIGN B p t ' 1 ( ) } *transform x 1 to0, and2 u t ( ) *= ?{ SIGN B p t ' 2 ( ) } *transform x 2 to 0. Thus,the equations should be valid:1 x{ e BSIGN B p t dt } ' 1 ( ) At0 © 2015 Global Journals Inc. (US) Proximity Control for Linear Features -Application for Ship Control in Closed Approach test, we can conclude that T(x) may be the minimum time. However, the test for x = x ? excludes this possibility. Suppose now that in determining the optimal control mistake. For example, we believe that. Then, instead of (5.10), we obtain: [ ] (5.12) and could be removed from consideration T(x). It is true that T(x) ? T*(x), but on the basis of the expression (5.12) it is impossible to conclude, as incorrectly set u* = +1 for x ? . In other words, if you make a mistake in determining the optimal control law, in the course of such checks can be excluded from consideration the correct dependence T*(x). In practice, this item (5.2) is not very useful, since the engineer often need to build an optimal feedback system, and not check if T(x)is equal this optimal T*(x) or not. Nevertheless, the use of the Hamilton -Jacobi is essential matters in theoretical studies and validates the results obtained by using the minimum principle. Hamilton -Jacobi equation was largely seen as a sufficient condition. 
Let us now discuss the problem of solving the Hamilton-Jacobi equation and finding the optimal control from it. We know that the Hamiltonian of the time-optimal control problem is [6,16]

$$H(x, p, u) = 1 + \langle p, Ax \rangle + \langle p, Bu \rangle$$

and that the control minimizing $H$ over $|u_j| \le 1$ is $u = -SIGN\{B'p\}$, so that

$$\min_{u} H(x, p, u) = 1 + \langle p, Ax \rangle - \sum_{j=1}^{r} \left| \langle b_j, p \rangle \right|$$

Consider the partial differential (Hamilton-Jacobi) equation

$$1 + \left\langle \frac{\partial T(x)}{\partial x},\, Ax \right\rangle - \sum_{j=1}^{r} \left| \left\langle b_j,\, \frac{\partial T(x)}{\partial x} \right\rangle \right| = 0, \qquad T(0) = 0 \qquad (5.16)$$

Suppose that we were able to find a solution

$$\tilde{T} = \tilde{T}(x) \qquad (5.17)$$

of the partial differential equation (5.16) such that:

1) the function $\tilde{T}(x)$ satisfies the boundary condition $\tilde{T}(0) = 0$;
2) the control vector $\tilde{u} = -SIGN\{B'\,\partial\tilde{T}(x)/\partial x\}$ is admissible.

Substituting this control into the system equations, we obtain the closed-loop equation $\dot{x}(t) = Ax(t) - B\,SIGN\{B'\,\partial\tilde{T}(x)/\partial x\}$. If the solution of this equation (5.21) has the property of reaching the origin at the time $\tilde{T}(\xi)$ (5.22), then, in other words, the solution $\tilde{T}(x)$ of the Hamilton-Jacobi equation (5.16) is the minimum time $T^*(x)$, and $\tilde{u}$ is the time-optimal control.

c) The Locally Optimal Control

Suppose we are given the first-order system [1,13]

$$\dot{x}(t) = a x(t) + u(t), \qquad |u(t)| \le 1$$

It is required to transfer an arbitrary initial state $\xi$ to $0$ in minimum time. The Hamiltonian for this problem has the form

$$H(x, p, u) = 1 + a x p + u p$$

where the control minimizing $H$ is defined by $u = -SIGN\{p\}$. The Hamilton-Jacobi equation for this problem has the form

$$1 + a x \frac{dT(x)}{dx} - \left|\frac{dT(x)}{dx}\right| = 0, \qquad T(0) = 0 \qquad (5.25)$$

We define two regions $X_1$ and $X_2$ of the (one-dimensional) phase space as follows (for $a > 0$ both are understood to be restricted to the region $\Xi$ of (4.10)):

$$X_1 = \{x : x > 0\}, \qquad X_2 = \{x : x < 0\}$$

It is easy to see that the function

$$\tilde{T}(x) = -\frac{1}{a}\ln\left(1 - a|x|\right) \qquad (5.27)$$

(understood for $a = 0$ as the limit $\tilde{T}(x) = |x|$) is a solution of the partial differential equation (5.25) for all $x \in X_1 \cup X_2$, because

$$\frac{d\tilde{T}(x)}{dx} = \frac{1}{1 - ax} > 0 \;\; \text{for } x \in X_1, \qquad \frac{d\tilde{T}(x)}{dx} = -\frac{1}{1 + ax} < 0 \;\; \text{for } x \in X_2$$

Note that $\tilde{T}(0) = 0$. Consider the control $\tilde{u}$, defined as

$$\tilde{u} = -1 \;\; \text{for } x \in X_1, \qquad \tilde{u} = +1 \;\; \text{for } x \in X_2 \qquad (5.33)$$

and $\tilde{u}$ undefined when $x = 0$ (5.35). Assume that $\xi \in X_1$; then $\xi > 0$. As a result of substituting the expression (5.33) into the state equation, we obtain

$$\dot{x}(t) = a x(t) - 1, \qquad x(0) = \xi \qquad (5.37)$$

From (5.27) and (5.37) it follows that $x(t)$ decreases monotonically and, for all $0 \le t < \tilde{T}(\xi)$, remains in $X_1$. It means that the control remains unchanged, $\tilde{u} = -1$, until the state reaches the origin at the time $\tilde{T}(\xi)$; hence $\tilde{u} = -1$ is the time-optimal control for all $x \in X_1$. Similarly it is proved that the control $\tilde{u} = +1$ is the time-optimal control for all $x \in X_2$. Thus, we have found the regions $X_1$ and $X_2$ on each of which the time-optimal control law is constant.
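A short numerical check of this construction (Python with NumPy; the value of $a$ and the test points are our choices) confirms that the function (5.27) satisfies (5.25) on $X_1 \cup X_2$ and that the control (5.33) is the sign rule:

```python
import numpy as np

a = 0.5                            # plant parameter; for a > 0 we need |x| < 1/a
T_tilde = lambda x: -np.log(1.0 - a * abs(x)) / a     # candidate solution (5.27)

def hj_lhs(x: float, eps: float = 1e-7) -> float:
    """Left-hand side of (5.25): 1 + a*x*T'(x) - |T'(x)|."""
    dT = (T_tilde(x + eps) - T_tilde(x - eps)) / (2.0 * eps)
    return 1.0 + a * x * dT - abs(dT)

def u_tilde(x: float) -> float:
    """Control (5.33)/(5.35): -1 on X1 (x > 0), +1 on X2 (x < 0)."""
    return -np.sign(x)             # undefined at x = 0; sign(0) = 0 is a placeholder

for x in (0.5, 1.5, -1.2):
    print(x, round(hj_lhs(x), 6), u_tilde(x))   # residual ~ 0 on X1 and X2
```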
# VI. Conclusion

For this first-order system we did not encounter any difficulties in finding the optimal control by means of the Hamilton-Jacobi equation, because it was simple enough:

- to guess the solution of the Hamilton-Jacobi equation satisfying the boundary conditions [5,10,12,14];
- to identify the two regions $X_1$ and $X_2$.

If we try to find the optimal control for systems of higher order, we are at once confronted with the following challenges. It is almost impossible to find a solution of the Hamilton-Jacobi equation for systems of order higher than the second. Moreover, for a system of $n$-th order it is necessary to subdivide the phase space into at least $2n$ regions $X_1, X_2, \dots$, which for systems of order higher than the second is extremely difficult. Therefore, at present the design of optimal feedback systems is most often carried out by using the necessary conditions of the minimum principle, rather than the sufficient conditions of the Hamilton-Jacobi equation [12,15]. In general, we can conclude that:

- a procedure for obtaining the control of linear objects in close approach has been provided in relation to the movement of vessels [7,8];
- the structure of the optimal control system obtained by the developed control algorithms has been analyzed; on this basis one can design and create a control system that ensures the meeting of the movements of ships.

The results of further research will be presented in the next article.

# References

1. Athans, M., Falb, P. Optimal Control. Moscow: Engineering.
2. Phuong, N.X. Using of the Minimum Principle to find an optimal control for ship sailing through the narrow channel. In: Proceedings of the Conference on Transportation.
3. Blekhman, I.I. Synchronization of Dynamic Systems. Moscow: Science.
4. Bonilla, M., Malabre, M., Velasquez, R. Controlling a linear first order time varying parameter system using a linear constant parameters control law. In: Proceedings of the 32nd IEEE Conference on Decision and Control, vol. 2, 1993. ISBN 0-7803-1298-8.
5. Clarke, D. The foundations of steering and manoeuvering. In: Proceedings of the IFAC Conference on Manoeuvering and Control of Marine Crafts, Girona, Spain, 2003.
6. Loparev, V.K., Markov, A.V., Maslov, Y.V., et al. Applied mathematics in engineering and economic calculations. Collection of scientific papers, St. Petersburg, 2001.
7. Kulibanov, Y.M. Dynamic model in inverse problems of traffic control. In: Collection of scientific papers "Managing Transport Systems". St. Petersburg: SPGUVK, 1995.
8. Inose, H., Hamada, T. Traffic Control. Moscow: Transport, 1983. 248 p.
9. Zemlyanovsky, D.K. Calculation of maneuvering elements for preventing collisions. Novosibirsk Institute of Water Transport Engineers, 1960. 46 p.
10. Levine, W.S. The Control Handbook. New York: CRC Press, 1996.
11. Bendat, J.S., Piersol, A.G. Random Data: Analysis and Measurement Procedures. 4th ed. Wiley, 2011.
12. Phuong, N.X., Bich, V.N. Objectives of meeting movement — application for ship in maneuvering. International Journal of Mechanical Engineering and Applications, 3(3-1), 2015. ISSN 2330-0248.
13. Sethi, S.P., Thompson, G.L. Optimal Control Theory: Applications to Management Science and Economics. 2nd ed. Springer, 2000.
14. Geering, H.P. Optimal Control with Engineering Applications. Springer, 2007.
15. Hand, L.N., Finch, J.D. Analytical Mechanics. Cambridge University Press, 2008.
16. Arfken, G. Mathematical Methods for Physicists. 3rd ed. Orlando, FL: Academic Press, 1985.
17. Bryson, A., Ho, Yu-Chi. Applied Optimal Control. Moscow: Mir, 1972. 544 p.
18. Pontryagin, L.S., Boltyansky, V.G., Gamkrelidze, R.V., Mishchenko, E.F. The Mathematical Theory of Optimal Processes. Moscow: Science, 1969. 384 p.