
JPS6364102A - Learning control system - Google Patents

Learning control system

Info

Publication number
JPS6364102A
Authority
JP
Japan
Prior art keywords
value
command value
acceleration
learning control
gain value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP20837186A
Other languages
Japanese (ja)
Other versions
JPH0782382B2 (en)
Inventor
Taku Arimoto
有本 卓
Munehisa Takeda
宗久 武田
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mitsubishi Electric Corp
Original Assignee
Mitsubishi Electric Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mitsubishi Electric Corp filed Critical Mitsubishi Electric Corp
Priority to JP61208371A priority Critical patent/JPH0782382B2/en
Publication of JPS6364102A publication Critical patent/JPS6364102A/en
Publication of JPH0782382B2 publication Critical patent/JPH0782382B2/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Links

Landscapes

  • Numerical Control (AREA)
  • Feedback Control In General (AREA)

Abstract

PURPOSE: To achieve fast convergence by performing the next playback operation according to a command value obtained by adding, for each degree of freedom, the deviations in acceleration, velocity, and position of the controlled system, each multiplied by the corresponding loop gain value, to the teaching value or the current command value. CONSTITUTION: A command value arithmetic unit 1 stores the position data of the target work trajectory for the object and sets the gain value of each control loop. Based on this initial setting, the target trajectory θd is given as the first command value. During the run, the acceleration, velocity, and position signals detected at each sampling time by a detector 7 from the controlled system 6 are stored in a memory 9 through an A/D converter 8. When the first playback operation is completed, the unit 1 computes from the stored data an evaluation function J, for example an integral of the squared error, and determines whether J is larger or smaller than a prescribed value Jmin. If J is smaller than Jmin, the control ends; if it is larger, the unit 1 corrects the command value using the deviations in acceleration and so on between the target values and the output signals.

Description

DETAILED DESCRIPTION OF THE INVENTION [Industrial Application Field] The present invention relates to a learning control method for objects that are controlled repeatedly, such as playback robots, and particularly to a learning control method with fast convergence.

[Prior Art] In general, when an object is positioned by repetitive control, as in a playback robot, a learning control method such as that shown in Japanese Patent Laid-Open No. 60-57409 is adopted: a teaching operation is first performed to make the object memorize the position data of the target work trajectory (hereinafter referred to as the teaching values), playback operation is performed according to these teaching values, the difference between the teaching values and the actual trajectory (hereinafter referred to as the deviation) is detected, and this deviation is added to the teaching values to form the command value for the next playback operation.
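This basic prior-art update can be sketched in a few lines of Python (a hypothetical discrete-time rendering; the function name and the sampled-trajectory lists are illustrative, not from the patent):

```python
# Basic learning update of the prior art: the deviation between the
# teaching values and the measured trajectory is added to the teaching
# values to form the command for the next playback run.
def next_command(teaching, measured):
    return [t + (t - m) for t, m in zip(teaching, measured)]

teaching = [0.0, 1.0, 2.0]   # sampled target trajectory
measured = [0.0, 0.8, 1.7]   # trajectory actually followed
command = next_command(teaching, measured)
```

With the values above, the command becomes roughly [0.0, 1.2, 2.3]: each sample is over-driven by the amount it previously fell short.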

In addition to this general learning control, methods have been proposed to improve the accuracy and stability of learning control, such as the method shown in the above publication that accounts for the dynamic delay time of each degree of freedom, methods that take as the deviation the velocity error between the target trajectory and the actual trajectory, and methods that take the acceleration error.

[Problems to Be Solved by the Invention] In the conventional learning control methods, however, the gain values of the control system are merely chosen within a range that satisfies stability and are not determined in consideration of the characteristics of the system, so there is the problem that the trajectory does not match the target trajectory unless a large number of trials are repeated.

The present invention has been made to solve the above problems, and its object is to provide a learning control method with good positioning accuracy and fast convergence.

[Means for Solving the Problems] In the learning control method according to the present invention, for each degree of freedom, the deviations in acceleration, velocity, and position of the controlled object are each multiplied by the corresponding loop gain value, the results are added to the teaching value or the current command value, and the next playback operation is performed according to the command value thus obtained.

[Function] In the present invention, since the gain value of each control loop is taken into account for acceleration, velocity, and position when determining the next command value, learning suited to the control system becomes possible, and learning control with good positioning accuracy and fast convergence can be realized.
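As a hedged illustration of this gain-weighted correction (the rule that appears as equation (2) in the embodiment), a discrete-time Python sketch might look as follows; the parameter names Im, Kv, Kp and the per-sample list representation are assumptions of this sketch, not part of the patent text:

```python
# Gain-weighted learning correction: each deviation (acceleration,
# velocity, position) is multiplied by its loop gain before being
# added to the current command, sample by sample.
def corrected_command(u, e_acc, e_vel, e_pos, Im, Kv, Kp):
    return [u_t + Im * ea + Kv * ev + Kp * ep
            for u_t, ea, ev, ep in zip(u, e_acc, e_vel, e_pos)]

u2 = corrected_command([1.0], [0.5], [0.2], [0.1], Im=2.0, Kv=1.0, Kp=10.0)
```

For the single sample shown, the corrected command is 1.0 + 2.0·0.5 + 1.0·0.2 + 10.0·0.1, i.e. about 3.2.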

[Embodiment] An embodiment of the present invention will now be described with reference to the drawings. FIG. 1 is a block diagram showing one embodiment of the present invention. In the figure, (1) is a command value calculation device, for example a digital computer, that calculates and outputs the control command value; (2) is a D/A converter that converts the digital signal from the command value calculation device (1) into an analog signal; (3) is a comparator, for example an operational amplifier; (4) is a control circuit; (5) is a servo amplifier; (6) is the controlled object; (7) is a detector that detects the output signals representing the position, velocity, and acceleration of the controlled object (6); (8) is an A/D converter that converts the analog signals fed back from the detector (7) into digital signals; and (9) is a memory that stores the digital signals from the A/D converter (8).

Next, the operation will be described with reference to FIG. 2.

FIG. 2 is a flowchart showing the command value calculation program executed by the command value calculation device (1).

First, in the initial setting (step (11)), the position data of the target work trajectory is taught to the object by a teaching operation or the like, and the gain value of each control loop is set.

That is, the motor inertia Im is set as the gain for acceleration, the velocity servo gain Kv for velocity, and the position servo gain Kp for position. A playback operation is then performed based on this initial setting (step (12)). With the target trajectory denoted by θd, the first command value is given as

U1(t) = Im·θ̈d(t) + Kv·θ̇d(t) + Kp·θd(t) … (1)

During this run, the acceleration signal θ̈1(t), velocity signal θ̇1(t), and position signal θ1(t), detected at each sampling time by the detector (7) from the output signals of the controlled object (6), are stored in the memory (9) through the A/D converter (8). When one playback operation is completed, an evaluation function J, for example an integral of the squared error, is calculated in the command value calculation device (1) from the stored data (step (13)), and it is determined whether J is larger or smaller than a prescribed value Jmin (step (14)). If the evaluation function J is smaller than the prescribed value Jmin, the control ends; otherwise, the command value calculation device (1) corrects the command value U1(t) using the deviations in acceleration, velocity, and position between the target values and the output signals,

ë1(t) = θ̈d(t) − θ̈1(t), ė1(t) = θ̇d(t) − θ̇1(t), e1(t) = θd(t) − θ1(t),

and calculates a new command value U2(t) from equation (2) (step (15)).

U2(t) = U1(t) + Im·ë1(t) + Kv·ė1(t) + Kp·e1(t) … (2)

The same operation is then repeated until the evaluation function J becomes smaller than Jmin.
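The whole flowchart loop (run, evaluate J, compare with Jmin, correct, repeat) can be demonstrated numerically. The sketch below stands in for the servo system with a toy memoryless plant y = 0.8·u and corrects with a single position gain; the plant model, gain values, and tolerance are illustrative assumptions only, not the patent's servo dynamics:

```python
# Learning loop following steps (12)-(15) of FIG. 2: perform a playback
# run, compute the squared-error evaluation function J, stop once
# J < J_min, otherwise correct the command with the gain-weighted error.
def learning_loop(theta_d, dt=0.01, gain=1.0, J_min=1e-6, max_runs=50):
    u = list(theta_d)                      # first command: target trajectory
    for run in range(1, max_runs + 1):
        y = [0.8 * u_t for u_t in u]       # playback run on the toy plant
        e = [d - m for d, m in zip(theta_d, y)]
        J = sum(err * err for err in e) * dt   # evaluation function J
        if J < J_min:
            return run, J                  # converged
        u = [u_t + gain * err for u_t, err in zip(u, e)]  # correction
    return max_runs, J

runs, J = learning_loop([1.0] * 100)
```

With these numbers the per-sample error shrinks by a factor of 0.2 on each run, so J falls below J_min after a handful of playback iterations.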

In general, in addition to the terms appearing in equation (2) above, the equation of motion of the controlled object contains nonlinear terms: acceleration-dependent terms such as the inertia of the controlled object, velocity-dependent terms such as centrifugal force, and further terms such as the gravity term and the gravity compensation term.

Here, the first input U1(t) contains the linear terms Im·θ̈d(t), Kv·θ̇d(t), and Kp·θd(t) that are dominant in the system, so the error at this stage is due only to the nonlinear terms above, and a large correction is made in the first learning step. Furthermore, since the second and subsequent corrections also adjust the command by acceleration, velocity, and position using the coefficients of the dominant linear terms of the system, positioning accuracy improves and learning control with fast convergence can be realized.

In the above embodiment, the control system and the controlled object form an analog servo system, but a digital servo system may be used instead.

Although only one degree of freedom has been described above, the other degrees of freedom can of course be operated in the same way.

[Effects of the Invention] As described above, according to the present invention, the learning correction of acceleration, velocity, and position is performed for each degree of freedom in consideration of the coefficients of the linear terms that are dominant in the control system, so that learning suited to the control system becomes possible and learning control with good positioning accuracy and fast convergence can be realized.

[Brief Description of the Drawings]

FIG. 1 is a block diagram showing one embodiment of the present invention, and FIG. 2 is a flowchart showing its operation. In the figures, (1) is a command value calculation device, (4) is a control circuit, (6) is the controlled object, (7) is a detector, and (9) is a memory.

Claims (3)

[Claims]

(1) A learning control method in which a controlled object having a plurality of degrees of freedom is operated in playback according to teaching values, the deviation between the teaching values and the reproduced trajectory is measured, and the next playback operation is performed according to a command value obtained by adding a correction value based on said deviation to the teaching value or the current command value, this learning control being performed for each degree of freedom, characterized in that the correction value is obtained by multiplying each of the deviations in acceleration, velocity, and position of the controlled object by the corresponding loop gain value.
(2) The learning control method according to claim 1, wherein the initial teaching value is a value obtained by multiplying the acceleration, velocity, and position of the target trajectory of the object by a predetermined acceleration gain value, velocity gain value, and position gain value, respectively, and the next command value is a value obtained by adding to the teaching value or the current command value the acceleration deviation, velocity deviation, and position deviation each multiplied by the acceleration gain value, velocity gain value, and position gain value, respectively.
(3) The learning control method according to claim 2, wherein the acceleration gain value is the inertia value of the drive motor of the controlled object, the velocity gain value is the velocity servo gain value, and the position gain value is the servo gain value for the position of the controlled object.
JP61208371A 1986-09-04 1986-09-04 Learning control method Expired - Lifetime JPH0782382B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP61208371A JPH0782382B2 (en) 1986-09-04 1986-09-04 Learning control method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP61208371A JPH0782382B2 (en) 1986-09-04 1986-09-04 Learning control method

Publications (2)

Publication Number Publication Date
JPS6364102A true JPS6364102A (en) 1988-03-22
JPH0782382B2 JPH0782382B2 (en) 1995-09-06

Family

ID=16555177

Family Applications (1)

Application Number Title Priority Date Filing Date
JP61208371A Expired - Lifetime JPH0782382B2 (en) 1986-09-04 1986-09-04 Learning control method

Country Status (1)

Country Link
JP (1) JPH0782382B2 (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS56153410A (en) * 1980-04-30 1981-11-27 Mitsubishi Heavy Ind Ltd Position control system
JPS60171506A (en) * 1984-02-15 1985-09-05 Kobe Steel Ltd Learning control method
JPS61190604A (en) * 1985-02-18 1986-08-25 Toyota Motor Corp Position control method for feedback control


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0239303A (en) * 1988-07-29 1990-02-08 Okuma Mach Works Ltd Numerical controller containing detecting function for follow-up error
JPH02286188A (en) * 1989-04-28 1990-11-26 Toshiba Corp Stage mechanism controller
JPH0316583A (en) * 1989-06-15 1991-01-24 Toshiba Corp Stage mechanism controller
JPH03139384A (en) * 1989-10-25 1991-06-13 Toshiba Corp Stage mechanism controller
JPH04326102A (en) * 1991-04-25 1992-11-16 Okuma Mach Works Ltd Nc noncircular machining device
JP2015100877A (en) * 2013-11-25 2015-06-04 キヤノン株式会社 Robot control method, and robot control device
CN110154043A (en) * 2018-02-14 2019-08-23 发那科株式会社 The robot system and its control method of study control are carried out based on processing result
CN110154043B (en) * 2018-02-14 2023-05-12 发那科株式会社 Robot system for learning control based on machining result and control method thereof

Also Published As

Publication number Publication date
JPH0782382B2 (en) 1995-09-06

Similar Documents

Publication Publication Date Title
US10259118B2 (en) Robot system having function of simplifying teaching operation and improving operating performance by learning
JPH0683403A (en) Adaptive pi control system
JP3169838B2 (en) Servo motor control method
JPS6364102A (en) Learning control system
JPS58169212A (en) Position controller of servomotor
JPH0285902A (en) Feedforward controller
JPS6315303A (en) Learning control system
JPS6156880A (en) Line tracking control system
JPH0580805A (en) Adaptive sliding mode control system based on pi control loop
JP2703099B2 (en) Conveyor tracking method for industrial robots
JPS62245401A (en) Learning control system
JPS62174804A (en) Learning control method for industrial robot
Mizuochi et al. Force sensing and force control using multirate sampling method
JPH04362702A (en) Velocity control system in repetitive control
JP3031499B2 (en) Learning control method
JPS6315302A (en) Learning control system
JPH02309402A (en) Servo system
JPS5870315A (en) Controller of robot
JPH02137005A (en) Learning control method
JP2860602B2 (en) Head position control method for magnetic disk device
JP2889086B2 (en) Scanning exposure equipment
JP3232252B2 (en) Positioning control device and positioning control method
JPH06214656A (en) Sliding mode control method provided with damping element
JP2672539B2 (en) Automatic control device
JP2005085111A (en) Method and device for controlling mechanical system

Legal Events

Date Code Title Description
EXPY Cancellation because of completion of term