CN104754178B - audio control method - Google Patents
audio control method
- Publication number: CN104754178B
- Application number: CN201310754800.8A
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Abstract
The present invention relates to performance-equipment control technology, and specifically to an audio control method. The method includes: displaying a time axis on the display interface of an integrated control console; adding and/or deleting tracks for controlling the corresponding performance equipment, the tracks including light tracks; editing track attributes; adding materials; editing material attributes; and the integrated control console issuing corresponding control instructions according to each track's attributes and its materials' attributes. The present invention solves the technical problems of editing and synchronized control of current performance programs and of controlling sound-image movement effects.
Description
Technical field
The present invention relates to performance-equipment control technology, and specifically to an audio control method.
Background technology
In the process of arranging a performance program, one prominent problem is the coordination and synchronized control among the various specialties (audio, video, lighting, machinery, etc.). In a large-scale performance, each specialty is relatively independent, and a sizable crew is needed to ensure the smooth arrangement and execution of the show. While each specialty's program is being arranged, most of the time is spent on coordination and synchronization between specialties, and the time actually devoted to the program itself may be far less.

Because the specialties are relatively independent, their control methods differ greatly. For live audio-video synchronized editing, video is controlled from the lighting console while audio is controlled by a multi-track playback editor. Audio can easily be located to an arbitrary time and played back from there, but video can only be played from the first frame (an operator can manually cue it to a given position, but playback cannot start from a given timecode). This lacks the flexibility needed for live performance control.

In addition, once the loudspeaker positions of a professional sound system for film, television or stage are fixed, the sound image is essentially set at the center of the stage, either by the left/right main speakers at the two sides of the stage or by left-center-right main speakers. Although a performance venue is equipped with many loudspeakers in various positions besides the main stage speakers, the sound image of the loudspeaker system hardly changes over an entire performance.

Therefore, solving the editing and synchronized control of performance programs and the flexible control of sound-image movement effects is a key technical problem in this field that urgently needs to be resolved.
Invention content
The technical problem to be solved by the present invention is to provide an audio control method that can simplify multi-specialty control in settings such as film, television and stage performance, and that allows the sound image of a sound-reinforcement system to be set flexibly and quickly.

To solve the above technical problem, the technical solution adopted by the present invention is:

An audio control method, comprising:

adding audio tracks: adding, on the display interface, one or more audio tracks that are parallel to and aligned with the time axis, each audio track corresponding to one output channel;

editing audio track attributes;

adding audio materials: adding one or more audio materials to an audio track, and generating in the audio track an audio material icon corresponding to each audio material, the length of audio track occupied by the icon matching the total duration of the material;

editing audio material attributes, the audio material attributes including start position, end position, start time, end time, total duration, and playback length.
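The relationships among these six attributes can be sketched in code. This is an illustrative model only, not from the patent; all names are hypothetical, and it assumes times are expressed in seconds on the shared time axis:

```python
from dataclasses import dataclass

@dataclass
class AudioMaterial:
    """One audio material placed on an audio track (times on the shared time axis)."""
    start_position: float  # time-axis moment of the material's starting edge
    end_position: float    # time-axis moment of its ending edge
    start_time: float      # actual playback start; may be delayed past start_position
    end_time: float        # actual playback end; may be earlier than end_position

    @property
    def total_duration(self) -> float:
        # the material's original length: end position minus start position
        return self.end_position - self.start_position

    @property
    def playback_length(self) -> float:
        # the portion actually played (the "trim" of the material)
        return self.end_time - self.start_time

m = AudioMaterial(start_position=10.0, end_position=40.0,
                  start_time=12.0, end_time=35.0)
assert m.total_duration == 30.0    # the material itself is 30 s long
assert m.playback_length == 23.0   # only 23 s of it are actually played
```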
Compared with the prior art, the beneficial effects are as follows. Playback master control embodies the concept of "integrated performance management". From a technical point of view, the coupling between these units is very low: they can operate independently without affecting one another, and the only prominent link between them is "time", that is, what is played and when. From the user's point of view, this relationship in time is precisely what matters most. If the states of these units can be viewed and managed together in one place, the user is spared many unnecessary troubles, such as coordinating synchronization between the units, and cross-referencing and comparative revision between specialties during editing.
Description of the drawings
Fig. 1 is a schematic diagram of the audio control method of the embodiment.
Fig. 2 is a schematic diagram of the audio-control part of the audio control method of the embodiment.
Fig. 3 is a schematic diagram of the operation of the audio sub-tracks of the audio control method of the embodiment.
Fig. 4 is a schematic diagram of the video-control part of the audio control method of the embodiment.
Fig. 5 is a schematic diagram of the lighting-control part of the audio control method of the embodiment.
Fig. 6 is a schematic diagram of the device-control part of the audio control method of the embodiment.
Fig. 7 is a schematic diagram of the principle of the integrated performance control system of the embodiment.
Fig. 8 is a schematic diagram of the principle of the audio control module of the integrated performance control system of the embodiment.
Fig. 9 is a schematic diagram of the principle of the video control module of the integrated performance control system of the embodiment.
Fig. 10 is a schematic diagram of the principle of the lighting control module of the integrated performance control system of the embodiment.
Fig. 11 is a schematic diagram of the principle of the device control module of the integrated performance control system of the embodiment.
Fig. 12 is a schematic diagram of the interface of the multi-track playback editor module of the audio control method of the embodiment.
Fig. 13 is a schematic diagram of the principle of the audio-control part of the integrated performance control system of the embodiment.
Fig. 14 is a schematic diagram of the principle of the track matrix module of the integrated performance control system of the embodiment.
Fig. 15 is a schematic diagram of the principle of the video-control part of the integrated performance control system of the embodiment.
Fig. 16 is a schematic diagram of the principle of the lighting-control part of the integrated performance control system of the embodiment.
Fig. 17 is a schematic diagram of the principle of the device-control part of the integrated performance control system of the embodiment.
Fig. 18 is a schematic diagram of the steps of the variable-track sound-image trajectory control method of the embodiment.
Fig. 19 is a schematic diagram of the steps of the variable-track sound-image trajectory data generation method of the embodiment.
Fig. 20 is a schematic diagram of the speaker layout and the variable-track sound-image trajectory of the embodiment.
Fig. 21 is a schematic diagram of the triangular speaker nodes of the embodiment.
Fig. 22 is a schematic diagram of the steps of the variable-region sound-image trajectory data generation method of the embodiment.
Fig. 23 is a schematic diagram of the speaker layout and the variable-region sound-image trajectory of the embodiment.
Fig. 24 is a schematic diagram of the steps of the fixed-point sound-image trajectory data generation method of the embodiment.
Fig. 25 is a schematic diagram of the steps of the speaker link data generation method of the embodiment.
Fig. 26 is a schematic diagram of the speaker link of the embodiment.
Specific embodiment
Each type of sound-image trajectory control according to the present invention is further described below with reference to the accompanying drawings.

This embodiment provides an audio control method that can simplify multi-specialty control in settings such as film, television and stage performance, and that allows the sound image of a sound-reinforcement system to be set flexibly and quickly. The method realizes the centralized arrangement and control of materials of multiple specialties through the multi-track playback editor module of an integrated control console. As shown in Fig. 1, the audio control method includes the following steps:

S101: display a time axis on the display interface of the integrated control console;

S102: add and/or delete tracks for controlling the corresponding performance equipment;

S103: edit track attributes;

S104: add materials;

S105: edit material attributes;

S106: the integrated control console issues corresponding control instructions according to each track's attributes and its materials' attributes.
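The flow of steps S101 to S106 can be sketched in code. This is a hypothetical illustration only: the class names and the instruction format are inventions of this sketch, and it assumes that a muted track simply contributes no instructions, as the patent describes for the mute attribute:

```python
from dataclasses import dataclass, field

@dataclass
class Material:
    name: str
    start_time: float   # position on the shared time axis, in seconds

@dataclass
class Track:
    kind: str           # "audio", "video", "light" or "device"
    muted: bool = False
    materials: list = field(default_factory=list)

def build_instructions(tracks):
    """S106: derive control instructions from track and material attributes."""
    instructions = []
    for t in tracks:
        if t.muted:      # a muted track issues nothing
            continue
        for m in t.materials:
            instructions.append((m.start_time, t.kind, f"play {m.name}"))
    return sorted(instructions)   # one shared time axis drives every specialty

tracks = [
    Track("audio", materials=[Material("song1.wav", 0.0)]),
    Track("light", materials=[Material("chase1", 5.0)]),
    Track("video", muted=True, materials=[Material("clip1", 2.0)]),
]
# the muted video track contributes no instructions
assert build_instructions(tracks) == [
    (0.0, "audio", "play song1.wav"),
    (5.0, "light", "play chase1"),
]
```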
As shown in Fig. 2 and Fig. 12, the audio control method includes multi-track audio playback control (corresponding to the audio control module described below), which specifically includes the following steps:

S201: add audio tracks. Add, on the display interface, one or more audio track (regions) 1, 2 that are parallel to and aligned with the time axis, each audio track corresponding to one output channel.

S202: edit audio track attributes. The editable audio track attributes include track lock and track mute. The track mute attribute controls whether the audio materials on the track and on all of its sub-tracks are muted; it is the master control of the audio track. When the track lock attribute is set, apart from a few individual attributes such as mute and hiding added sub-tracks, the other attributes of the track and the positions and attributes of the materials in the audio track cannot be modified.

S203: add audio materials. Add one or more audio materials 111, 112, 113, 211, 212, 213, 214 to audio tracks 1, 2, and generate in the audio track an audio material icon corresponding to each material, the length of audio track occupied by the icon matching the material's total duration. Before an audio material is added, an audio material list is first obtained from the audio server, and the material to be added to the audio track is then selected from that list. Once an audio material has been added to an audio track, an audio property file corresponding to that material is generated. The integrated control console exercises control by editing the property file and sending instructions to the audio server, rather than by directly calling or editing the material's source file, which ensures the safety of the source files and the stability of the console.
S204: edit audio material attributes. The audio material attributes include start position, end position, start time, end time, total duration, and playback length. The start position is the time-axis moment corresponding (along the vertical direction) to the material's starting edge, and the end position is the time-axis moment corresponding (vertically) to its ending edge. The start time is the moment on the time axis at which the material actually begins to play, and the end time is the moment at which playback actually ends. In general, the start time may be delayed relative to the start position, and the end time may be earlier than the end position. The total duration is the original length of the audio material: the time difference between the start position and the end position. The playback length is the length of time the material is actually played on the time axis: the time difference between the start time and the end time. By adjusting the start time and end time, a trimming operation can be applied to the material, so that only the part the user wishes to hear is played.

Moving an audio material (horizontally) within its audio track changes its start and end positions, but the relative distance between them on the time axis does not change; that is, the length of the material does not change. Adjusting the material's start time and end time changes its actual playback time on the time axis and the length of that playback. Multiple audio materials can be placed in one audio track, meaning that within the period represented by the time axis, several materials can be played in succession (through the corresponding output channel). Note that the position (time location) of any audio material within its track can be adjusted freely, but the materials must not overlap one another.
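The non-overlap rule above amounts to an interval-intersection check. A minimal sketch, with hypothetical names not taken from the patent, where each material is reduced to its (start position, end position) pair:

```python
def overlaps(a, b):
    """True if two materials' [start_position, end_position) spans intersect."""
    return a[0] < b[1] and b[0] < a[1]

def can_place(track_materials, new_material):
    """Materials on one audio track may be moved freely but must not overlap."""
    return not any(overlaps(m, new_material) for m in track_materials)

track = [(0.0, 30.0), (40.0, 70.0)]        # existing materials on the track
assert can_place(track, (30.0, 40.0))      # fits exactly in the gap
assert not can_place(track, (25.0, 35.0))  # collides with the first material
```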
Further, because the integrated control console controls only the property files corresponding to the audio materials, it can also cut and splice audio materials. A cutting operation divides one audio material on an audio track into several materials, each of which receives its own property file; the source file remains intact, and the console issues control commands according to these new property files, calling the source file to perform the corresponding playback and audio operations in turn. Similarly, a splicing operation merges two audio materials into one: their property files are merged into a single property file, and through that one file the console directs the audio server to call the two audio source files.
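The cut and splice operations described above touch only property files, never the source files. A hypothetical sketch, representing a property file as a plain dictionary (the field names are this sketch's invention, not the patent's format):

```python
def split_property(prop, at):
    """Cut one material's property file into two at time-axis moment `at`."""
    assert prop["start_time"] < at < prop["end_time"]
    left = dict(prop, end_time=at)
    right = dict(prop, start_time=at)
    return left, right   # both halves still reference the same untouched source file

def splice_properties(a, b):
    """Merge two adjacent materials into one property file playing both sources."""
    return {"sources": [a["source"], b["source"]],
            "start_time": a["start_time"],
            "end_time": b["end_time"]}

p = {"source": "song.wav", "start_time": 0.0, "end_time": 60.0}
left, right = split_property(p, 20.0)
assert left["end_time"] == 20.0 and right["start_time"] == 20.0
assert left["source"] == right["source"] == "song.wav"   # source file untouched
```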
Further, groups of physical operation keys, each group corresponding to one audio track, can be provided on the integrated control console so that audio material attributes can be adjusted manually, for example a material playback-time adjustment knob that nudges a material's position (time-axis position) in its track forward or backward.
S205: add audio sub-tracks 12, 13, 14, 15, 21, 22. Add one or more sub-tracks corresponding to one of the audio tracks; each sub-track is parallel to the time axis and corresponds to the output channel of its parent audio track.

Each audio track can have attached sub-tracks of two types: sound-image sub-tracks and sound-effect sub-tracks. A sound-image sub-track applies sound-image trajectory processing to some or all of the audio materials of its parent track; a sound-effect sub-track applies effect processing to some or all of those materials.

Within this step, the following can further be performed:

S301: add a sound-image sub-track and sound-image materials. Add one or more sound-image materials 121, 122 to a sound-image sub-track, and generate in the sub-track an icon corresponding to each sound-image material, the length of sub-track occupied by it matching the material's total duration.

S302: edit sound-image sub-track attributes. As with audio tracks, the editable attributes include track lock and track mute.

S303: edit sound-image material attributes. As with audio materials, these include start position, end position, start time, end time, total duration, and playback length.

A sound-image material on a sound-image sub-track applies sound-image trajectory processing, during the period between its start time and end time, to the signal output by the output channel of the sub-track's parent audio track. Adding different types of sound-image materials to the sub-track therefore applies different types of trajectory processing to that channel's output, and adjusting each material's start position, end position, start time and end time adjusts when the trajectory processing begins and how long the trajectory effect lasts.
The difference between a sound-image material and an audio material is that an audio material represents audio data, whereas sound-image trajectory data are the output-level data of each speaker node as they change over time within a period of set length, chosen so that the sound image formed by the output levels of the virtual speaker nodes in the speaker layout either runs along a preset path or remains stationary. In other words, the trajectory data contain the output-level change data of all speaker nodes in the layout over that set period. The types of trajectory data are fixed-point, variable-track and variable-region sound-image trajectory data. The type of the trajectory data determines the type of the sound-image material, and the total movement duration of the trajectory data determines the time difference between the material's start position and end position, i.e. the material's total duration. Sound-image trajectory processing means adjusting the actual output level of each physical speaker corresponding to each speaker node according to the trajectory data, so that the sound image of the physical speaker system runs along the set path, or remains stationary, within the period of set length.
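To make "output-level data of each speaker node changing over time" concrete, here is a hypothetical sketch of trajectory data for the simplest possible path: a constant-power pan moving the image from one node to another. The patent does not specify a panning law; the cosine/sine curve is this sketch's assumption, and the function name is illustrative:

```python
import math

def pan_trajectory(duration, steps):
    """Level curves for two speaker nodes A and B as the image moves A -> B."""
    data = []   # [(t, level_A, level_B), ...] sampled along the set period
    for i in range(steps + 1):
        x = i / steps                         # 0 at node A, 1 at node B
        t = duration * x
        level_a = math.cos(x * math.pi / 2)   # constant power: a^2 + b^2 == 1
        level_b = math.sin(x * math.pi / 2)
        data.append((t, level_a, level_b))
    return data

traj = pan_trajectory(duration=4.0, steps=4)
t0, a0, b0 = traj[0]
assert (a0, b0) == (1.0, 0.0)                # image starts wholly at node A
t_end, a_end, b_end = traj[-1]
assert abs(a_end) < 1e-9 and abs(b_end - 1.0) < 1e-9   # and ends at node B
```

Real trajectory data would carry one such level curve per speaker node in the layout, and the trajectory processing stage would scale each physical speaker's output by its node's level at each instant.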
S304: add sound-effect sub-tracks. Their types include volume/gain sub-tracks 13, 22 and EQ sub-tracks 14, 15, 21; each audio track can have one volume/gain sub-track and one or more EQ sub-tracks. The volume/gain sub-track adjusts the signal level of the parent track's corresponding output channel, and the EQ sub-tracks apply EQ processing to the signal output by that channel.

S305: edit sound-effect sub-track attributes. Besides track lock, track mute and the track identifier, these include the effect-processing parameters corresponding to the sub-track type: a volume/gain sub-track carries output-level adjustment parameters, and an EQ sub-track carries EQ processing parameters. Changing a sub-track's effect parameters adjusts the sound of the parent track's output channel.

S206: save the data, or generate, from the attributes of the audio tracks and their sub-tracks and of the audio and sound-image materials, control instructions addressed to the materials' source files, and according to those instructions perform playback control and sound-image and effect-processing control on the source files.
The control instructions decide whether to call (play) a material's audio source file, the start and end times of source-file playback (referenced to the time axis), and the sound-image and effect processing applied to the source file; the specific instructions correspond to the attributes of each audio track and its attached sub-tracks and to the audio and sound-image material attributes. That is, an audio track never directly calls or processes the source file of an audio material; it handles only the property file corresponding to that source file, and indirect control of the source file is achieved by editing the property file, adding/editing sound-image materials, and adjusting the attributes of the track and its sub-tracks.

It works like a playlist: when playback of the list begins, the audio materials that have been added to the audio tracks will be played in turn. Editing an audio track's mute attribute controls whether the track and its attached sub-tracks are muted (effective); editing the track's lock attribute locks everything except a few individual attributes such as mute and hiding added sub-tracks, so that the other attributes and the positions and attributes of the materials in the track cannot be modified (locked state). For a more detailed description, refer to the passages above.
As shown in Fig. 4 and Fig. 12, the audio control method of this embodiment can also optionally add video playback control (corresponding to the video control module described below), which specifically includes the following steps:

S401: add a video track. Add (on the display interface) a video track (region) 4 that is parallel to and aligned with the time axis; the video track corresponds to one controlled device, in the present invention a video server.

S402: edit video track attributes. The editable attributes include track lock and track mute, and are similar to the audio track attributes.

S403: add video materials. Add one or more video materials 41, 42, 43, 44 to the video track, and generate in the track an icon corresponding to each video material, the length of track occupied by it matching the material's total duration. Before a video material is added, a video material list is first obtained from the video server, and the material to add to the video track is then selected from that list. Once a video material has been added to the track, a video property file corresponding to the material is generated; the integrated control console exercises control by editing the property file and sending instructions to the video server, rather than by directly calling or editing the material's source file, ensuring the safety of the source files and the stability of the console.

S404: edit video material attributes, which include start position, end position, start time, end time, total duration, and playback length. Video material attributes are similar to audio material attributes, and video materials can likewise be moved horizontally, cut and spliced on the integrated control console; a group of physical operation keys corresponding to the video track can also be added so that video material attributes can be adjusted manually.

S405: save the data, or generate, from the video track and video material attributes, control instructions addressed to the materials' source files, and perform playback control of the video source files according to those instructions. The specific control instructions correspond to the attributes of the video track and of the video materials.
As shown in Fig. 5 and Fig. 12, the audio control method of this embodiment can also optionally add lighting control (corresponding to the lighting control module described below), which specifically includes the following steps:

S501: add a light track. Add (on the display interface) a light track (region) 3 that is parallel to and aligned with the time axis; the light track corresponds to one controlled device, in the present invention a lighting-network signal adapter (such as an Artnet network card).

S502: edit light track attributes. The editable attributes include track lock and track mute, and are similar to the audio track attributes.

S503: add light materials. Add one or more light materials 31, 32, 33 to the light track, and generate in the track an icon corresponding to each light material, the length of track occupied by it matching the material's total duration. As with audio and video materials, loading a light material generates a property file corresponding to the light material's source file, and control instructions issued through the property file control the output of that source file.

A light material is lighting-network control data of a certain duration, for example Artnet data encapsulating DMX data. A light material can be generated as follows: after a lighting program has been arranged on a conventional lighting console, the integrated control console connects its lighting-network interface to the one on the conventional console and records the lighting control signals that the console outputs. During recording, the integrated console stamps the recorded control signals with timecode, so that they can later be edited and controlled on the light track.
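The recording step above pairs each captured network frame with a timecode relative to the start of recording. A minimal hypothetical sketch (the function name and the representation of a frame as raw bytes are this sketch's assumptions, and no real Artnet parsing is attempted):

```python
import time

def record_light_material(frames, clock=time.monotonic):
    """Stamp each incoming lighting-network frame with a relative timecode."""
    t0 = clock()
    material = []
    for frame in frames:                       # each frame: raw payload bytes
        material.append((clock() - t0, frame))  # (timecode, payload) pair
    return material

# two fake 512-channel DMX payloads standing in for captured frames
fake_frames = [bytes([0] * 512), bytes([255] * 512)]
mat = record_light_material(fake_frames)
assert len(mat) == 2
assert mat[0][0] <= mat[1][0]   # timecodes are non-decreasing
assert mat[1][1][0] == 255      # payload bytes survive intact
```

With the timecodes attached, a recorded material can be trimmed or repositioned on the light track exactly like an audio material.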
S504: edit light material attributes, which include start position, end position, start time, end time, total duration, and playback length. Light material attributes are similar to audio material attributes, and light materials can likewise be moved horizontally, cut and spliced on the integrated control console; a group of physical operation keys corresponding to the light track can also be added so that light material attributes can be adjusted manually.

S505: save the data, or generate, from the light track and light material attributes, control instructions addressed to the materials' source files, and perform playback control of the light source files according to those instructions. The specific control instructions correspond to the attributes of the light track and of the light materials.
As shown in Fig. 6 and Fig. 12, the audio control method of this embodiment can also optionally add device control (corresponding to the device control module described below), which specifically includes the following steps:

S601: add device tracks. Add (on the display interface) one or more device tracks (regions) 5 parallel to the time axis, each device track corresponding to one controlled device, for example a mechanical device. Before a device track is added, it must be confirmed that the controlled device has established a connection with the integrated control console. The console and the controlled devices establish connections over TCP: for example, the console is configured as a TCP server and each controlled device as a TCP client, and the device's TCP client actively connects to the console's TCP server after joining the network.
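The connection setup in S601 can be sketched with a loopback example: the console listens as a TCP server, and a device connects and announces itself. The registration message format and all names here are this sketch's inventions; the patent specifies only the server/client roles:

```python
import socket
import threading
import time

def run_console(host="127.0.0.1", port=0):
    """The integrated control console as a TCP server awaiting one device."""
    srv = socket.socket()
    srv.bind((host, port))      # port 0: let the OS pick a free port
    srv.listen()
    devices = {}                # name -> connection, filled as devices join

    def accept_one():
        conn, _addr = srv.accept()
        name = conn.recv(64).decode()   # device announces its name on connect
        devices[name] = conn

    threading.Thread(target=accept_one, daemon=True).start()
    return srv, srv.getsockname()[1], devices

srv, port, devices = run_console()

# the controlled device side: actively connect after joining the network
dev = socket.create_connection(("127.0.0.1", port))
dev.sendall(b"winch-1")
time.sleep(0.2)                 # give the accept thread a moment

assert "winch-1" in devices     # the console can now send it control commands
dev.close()
srv.close()
```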
S602: edit device track attributes. The editable attributes include track lock and track mute, and are similar to the audio track attributes; if a device track is muted, none of its attached control sub-tracks performs any operation.

S603: add control sub-tracks. Add one or more control sub-tracks corresponding to one of the device tracks; each control sub-track is parallel to the time axis and corresponds to the controlled device of its device track.

S604: add control materials. Add control materials of the type matching the control sub-track, and generate the corresponding control material on the sub-track, the length of sub-track occupied by the material matching its total duration.

The types of control sub-track include TTL control sub-tracks, relay control sub-tracks and network control sub-tracks. Correspondingly, the materials that may be added to a TTL control sub-track include TTL materials 511, 512, 513 (e.g. TTL high-level and TTL low-level control materials); those that may be added to a relay sub-track include relay materials 521, 522, 523, 524 (e.g. relay-open and relay-close control materials); and those that may be added to a network control sub-track include network materials 501, 502, 503 (e.g. TCP/IP, UDP, RS-232 and RS-485 protocol communication control materials). Adding the corresponding control material causes the corresponding control instruction to be issued; a control material is essentially a control instruction.
S605:Edit control sub-material attributes. The attributes include the start position, the end position and the total duration. By adjusting (moving horizontally) the position of a control sub-material in its control sub-track, the start position and the end position can be changed, but the relative positions of the start position and the end position on the time axis do not change, i.e. the length of the control material does not change. The start position of a control material is the time-axis moment at which the control instruction corresponding to the control material begins to be sent to the corresponding controlled device, and the end position is the time-axis moment at which sending of the control instruction ends.
Further, an association relation can also be set between control materials in the same control sub-track: if the control command corresponding to a control material whose start position corresponds to an earlier time-axis moment has not been executed successfully, the control instruction corresponding to the associated control material whose start position corresponds to a later time-axis moment will not be sent out (by the integrated control platform) or will not be executed (by the controlled device), for example in the opening/closing and lifting control of a stage curtain.
Further, a guard time of a certain length can be set before and after each control material of a control sub-track, i.e. within the guard time no control material can be added to the control sub-track, or no control command can be sent out.
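As an informal sketch of the placement rule just described, the check below rejects a new control material whose span would fall inside an existing material or its guard time. The names `ControlMaterial`, `can_add` and `guard_s` are illustrative, not from the patent; times are in seconds under that assumption.

```python
from dataclasses import dataclass

@dataclass
class ControlMaterial:
    start: float      # time-axis moment at which sending the instruction begins (s)
    duration: float   # total duration of the material (s)

    @property
    def end(self) -> float:
        return self.start + self.duration

def can_add(track: list, new: ControlMaterial, guard_s: float) -> bool:
    """A new material may not overlap an existing material or the guard
    time (the protected interval of length guard_s before and after it)."""
    for m in track:
        protected_start = m.start - guard_s
        protected_end = m.end + guard_s
        # Reject any overlap with the protected interval.
        if new.start < protected_end and new.end > protected_start:
            return False
    return True

track = [ControlMaterial(start=10.0, duration=2.0)]   # occupies 10..12 s
print(can_add(track, ControlMaterial(12.5, 1.0), guard_s=1.0))  # falls in guard time -> False
print(can_add(track, ControlMaterial(14.0, 1.0), guard_s=1.0))  # clear of guard time -> True
```

The same predicate could equally be applied at send time, refusing to emit a control command that falls inside a guard interval.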
S606:Save the data, or generate control instructions according to the attributes of the control track and its control sub-tracks and the attributes of the control materials, and send the control instructions to the corresponding controlled devices.
In addition, the present embodiment also provides a performance integrated control system. As shown in Fig. 7, the system includes an integrated control platform 70 and, optionally, an audio server 76, a video server 77, a lighting control module 78 and a device control module 79. The integrated control platform 70 includes a multi-track editing and playback module 71, which can perform one or more of the audio control, video control, lighting control and device control of the above performance integrated control method; the specific implementation steps are not repeated here. The multi-track playback and editing control module includes an audio control module 72 and, optionally, a video control module 73, a lighting control module 74 and a device control module 75.
As shown in Fig. 8, the audio control module 72 includes an audio track adding module 81, an audio track attribute editing module 82, an audio material adding module 83, an audio material attribute editing module 84, an audio sub-track adding module 85, and a data saving / audio control instruction output module 86. The functions realized by these modules correspond one-to-one with the aforementioned steps S201 to S206 and are not repeated here; the same applies below.
Further, the audio playback control principle of the performance integrated control system is shown in Fig. 13. The integrated control system further includes a quick playback editing module, a physical input module and the multi-track playback editing module. The quick playback editing module is used for editing audio materials in real time and sending out corresponding control instructions so that the audio server 76 plays the source files corresponding to the audio materials. The physical input module corresponds to the physical operation keys on the integrated control platform 71 and is used for real-time tuning control of the audio sources input into the control platform from outside.
Correspondingly, the audio server is provided with a mixing matrix module, a trajectory matrix module, a 3x1 output mixing module and a physical output module. The mixing matrix module can receive the audio signals output from the audio source files in the audio server that are called, in the form of control commands, by the quick playback editing module and the multi-track playback editing module, as well as the audio signals output by the physical input module; similarly, the trajectory matrix module can also receive each of the above audio inputs. The mixing matrix module is used to mix each audio input and output the result to the output mixing module, and the trajectory matrix module is used to perform acoustic image trajectory processing on each audio input and output the result to the output mixing module. The output mixing module can receive the audio outputs from the mixing matrix module, the trajectory matrix module and the physical input module and, after 3x1 mixing, output them through each physical output interface of the physical output module. Here, acoustic image trajectory processing means adjusting the level output to each speaker entity according to acoustic image trajectory data, so that the acoustic image of the speaker entity system runs along a set path, or remains stationary, within a period of set length.
In the present embodiment, the source files of the audio materials are stored on the audio server outside the integrated control platform. The multi-track playback editing module does not directly call or process the source files of the audio materials; it only processes the attribute files corresponding to the audio source files, and realizes indirect control of the audio source files by editing and adjusting the attribute files of the source files, adding/editing acoustic image materials, and editing the attributes of the audio tracks and their sub-tracks. Therefore, the output channel corresponding to each audio track outputs only control signals/instructions for the audio source files; the audio server receiving the control instructions then performs the various kinds of processing on the audio source files.
As shown in Fig. 14, the multi-track playback editing module receives a list of available audio materials from the audio server 76 and does not process the audio source files directly. The audio source files are stored in the audio server, which, after receiving the corresponding control commands, calls the audio source files and performs various audio effect processing on them, for example mixing in the mixing matrix module and trajectory processing in the trajectory matrix module. Acoustic image materials are in fact also control commands; they can be stored either in the integrated control platform 71 or uploaded to the audio server.
As shown in Fig. 9, the video control module 73 includes a video track adding module 91, a video track attribute editing module 92, a video material adding module 93, a video material attribute editing module 94, and a data saving / video control instruction output module 95. The functions realized by these modules correspond one-to-one with the aforementioned steps S401 to S405.
Further, the video editing and playback control principle of the performance integrated control system is shown in Fig. 15. The integrated control platform does not operate on the source files of the video materials directly; instead, it obtains the video material list and the corresponding attribute files and sends control instructions to the video server, and the video server then performs playback and effect operations on the source files of the video materials according to the control instructions.
As shown in Fig. 10, the lighting control module 74 includes a light track adding module 110, a light track attribute editing module 120, a light material adding module 130, a light material attribute editing module 140, and a data saving / lighting control instruction output module 150. The functions realized by these modules correspond one-to-one with the aforementioned steps S501 to S505.
Further, the lighting control principle of the performance integrated control system is shown in Fig. 16. The integrated control platform is also provided with a light signal recording module for recording the light control signals output by the lighting console, and for stamping time codes onto the light control signals recorded during the recording process so that they can be edited on the light track.
As shown in Fig. 11, the device control module 75 includes a device track adding module 151, a device track attribute editing module 152, a control sub-track adding module 153, a control material adding module 154, a control material attribute editing module 155, and a data saving / device control instruction output module 156. The functions realized by these modules correspond one-to-one with the aforementioned steps S601 to S606.
Further, the device control principle of the performance integrated control system is shown in Fig. 17. The various device control signals output by the integrated control platform are output through each protocol interface on a device adapter to the corresponding controlled devices.
In addition, the integrated control platform can also include an acoustic image trajectory data generation module for making (generating) acoustic image trajectory data (i.e. acoustic image materials). The acoustic image trajectory data obtained through this module is called by the multi-track playback editing module, so as to control the trajectory matrix module of the audio server and thereby control the acoustic image trajectory. Further, the present embodiment provides a variable-track acoustic image trajectory control method. In this control method, a control host (e.g. the integrated control platform or the audio server) configures the output level value of each speaker node of an entity speaker system, so that the acoustic image moves in a set manner, or remains stationary, within a set total duration. As shown in Fig. 18, the control method includes:
S101:Generate acoustic image trajectory data;
S102:Within the total duration corresponding to the acoustic image trajectory data, adjust the output level of each speaker entity according to the acoustic image trajectory data;
S103:Within the total duration, superpose the input level of the signal input to each speaker entity with the output level of the corresponding speaker entity, so as to obtain the level actually output by each speaker entity.
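The superposition in S103 can be sketched as follows, on the assumption (consistent with the dB formulas later in the text) that both levels are expressed in decibels, so that superposition is addition, and that negative infinity models a fully silenced node. The function name is illustrative.

```python
import math

def actual_output_db(input_level_db: float, node_output_db: float) -> float:
    """S103: superpose the input-signal level with the node's trajectory
    output level. A node held at -inf by the trajectory data stays silent."""
    if math.isinf(node_output_db) and node_output_db < 0:
        return float("-inf")  # muted node: no output regardless of input
    return input_level_db + node_output_db

# A node attenuated by 6 dB by the trajectory data, fed a -10 dB signal:
print(actual_output_db(-10.0, -6.0))           # -16.0
print(actual_output_db(-10.0, float("-inf")))  # -inf
```

In a real system this addition would happen per audio frame, either before the signal reaches the speaker entity or inside it, as discussed below.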
Acoustic image trajectory data means the output level data of each speaker node changing over time within a period of set length (i.e. the total duration of the acoustic image), such that the acoustic image formed by the output levels of the virtual speaker nodes in the virtual speaker distribution map on the integrated control platform runs along a preset path or remains stationary. That is, the acoustic image trajectory data contains the output level change data of all speaker nodes in the speaker distribution map within the period of set length. For each speaker node, its output level changes over time within the set period; it may also be zero, negative, or even negative infinity, with negative infinity preferred.
Each speaker node corresponds to one speaker entity in the entity speaker system, and each speaker entity includes one or more speakers located at the same position; i.e. each speaker node can correspond to one or more co-located speakers. In order for the entity speaker system to accurately reproduce the acoustic image path, the position distribution of the virtual speaker nodes in the speaker distribution map should correspond to the position distribution of the speaker entities in the entity speaker system; in particular, the relative position relations between the speaker nodes should correspond to the relative position relations between the speaker entities.
The level actually output by a speaker entity is obtained by superposing the level of the input signal with the output level, in the above acoustic image trajectory data, of the speaker node corresponding to that speaker entity. The former is a characteristic of the input signal; the latter can be regarded as a characteristic of the speaker entity itself. At any moment, different input signals have different input levels, while for the same speaker entity there is only one output level. It can therefore be understood that acoustic image trajectory processing is processing of the output level of each speaker entity, so as to form a preset acoustic image path effect (including a stationary acoustic image).
The superposition of the input level and the output level of a speaker entity can be processed before the audio signal actually enters the speaker entity, or after it has entered the speaker entity; this depends on how the links of the entire sound reinforcement system are composed, and on whether the speaker entity has a built-in audio signal processing module, such as a DSP unit.
The types of acoustic image trajectory data include: fixed-point acoustic image data, variable-track acoustic image trajectory data and variable-domain acoustic image trajectory data.
When acoustic image trajectory data are generated by simulation on the integrated control platform, in order to conveniently control the speed and progress of the acoustic image, the embodiments of the present invention represent the running path of the acoustic image by sequentially connected line segments between several discretely distributed acoustic image trajectory control points in the speaker distribution map; the running path of the acoustic image and its total running time are determined by the several discretely distributed acoustic image trajectory control points.
A fixed-point acoustic image means a situation in which, within a period of set length, one or more speaker nodes selected in the speaker distribution map continuously output levels, while the output level value of each unselected speaker node is zero or negative infinity. Correspondingly, fixed-point acoustic image data means the output level data of each speaker node changing over time within a period of set length, in which the one or more selected speaker nodes continuously output levels while the unselected speaker nodes output no level, or output level values of zero or negative infinity. For a selected speaker node, the output level is continuous within the set time (and may also fluctuate up and down); for an unselected speaker node, the output level remains negative infinity within the set time.
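A minimal sketch of fixed-point acoustic image data under the description above: the selected nodes hold a constant level for the whole set period, and every other node stays at negative infinity. The node names, the per-frame sampling, and the dict representation are all illustrative assumptions.

```python
def fixed_point_data(all_nodes, selected, level_db, frames):
    """Return per-frame output levels for each node: {node: [level, ...]}.
    Selected nodes hold level_db; unselected nodes stay at -inf."""
    neg_inf = float("-inf")
    return {
        node: [level_db if node in selected else neg_inf] * frames
        for node in all_nodes
    }

data = fixed_point_data(["A", "B", "C"], {"B"}, -3.0, frames=4)
print(data["B"])  # [-3.0, -3.0, -3.0, -3.0]
print(data["A"])  # [-inf, -inf, -inf, -inf]
```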
A variable-track acoustic image means a situation in which, within a period of set length, each speaker node outputs levels according to a certain rule in order to make the acoustic image run along a preset path. Correspondingly, variable-track acoustic image trajectory data means the output level data of each speaker node changing over time, within a period of set length, that make the acoustic image run along the preset path. The running path of the acoustic image does not need to be exactly accurate, and the duration of an acoustic image movement (run) will not be very long; it is only necessary to roughly establish a running effect of the acoustic image that the audience can recognize.
A variable-domain acoustic image means a situation in which, within a period of set length, the output level of each speaker node changes according to a certain rule in order to make the acoustic image run through preset areas. Correspondingly, variable-domain acoustic image trajectory data means the output level data of each speaker node changing over time, within a period of set length, that make the acoustic image run through the preset areas.
As shown in Fig. 19, variable-track acoustic image trajectory data can be obtained by the following method:
S201:Set speaker nodes: in the speaker distribution map 10, add or delete speaker nodes 11, referring to Fig. 20.
S202:Modify speaker node attributes: the attributes of a speaker node include speaker coordinates, speaker type, corresponding output channel, initialization level, speaker name, etc. A speaker node is represented in the speaker distribution map by a speaker icon, and its coordinate position can be changed by moving the speaker icon. The speaker type refers to a full-range speaker or an ultra-low-frequency speaker; the concrete types can be divided according to actual conditions. Each speaker node in the speaker distribution map is allocated one output channel, and each output channel corresponds to one speaker entity in the entity speaker system; each speaker entity includes one or more co-located speakers, i.e. each speaker node can correspond to one or more co-located speakers. In order to reproduce the acoustic image running path designed in the speaker distribution map, the position distribution of the speaker entities should correspond to the position distribution of the speaker nodes in the speaker distribution map.
S203:Divide triangular areas: as shown in Fig. 20, according to the distribution of the speaker nodes, the speaker distribution map is divided into multiple triangular areas, the three vertices of each triangular area being speaker nodes. The triangular areas do not overlap, no triangular area contains any other speaker node, and each speaker node corresponds to one output channel (or audio playback device);
Further, auxiliary speaker nodes can also be set to assist in determining the triangular areas; an auxiliary speaker node has no corresponding output channel and outputs no level;
S204:Set acoustic image trajectory control points and a running path: set, in the speaker distribution map, a running path 12 of the acoustic image changing over time and several acoustic image trajectory control points 14 located on this running path. The following methods may be used to set the acoustic image running path and the acoustic image trajectory control points:
1. Fixed-point construction: determine, in turn, the (coordinate) positions of several acoustic image trajectory control points in the speaker distribution map; the acoustic image trajectory control points are connected in turn to form the acoustic image running path. The moment corresponding to the first determined acoustic image trajectory control point is zero, and the moment corresponding to each subsequent acoustic image trajectory control point is the time elapsed from determining the first acoustic image trajectory control point to determining the current acoustic image trajectory control point. For example, the acoustic image trajectory control points can be clicked in the speaker distribution map with a clicking sign (e.g. a mouse pointer); the time elapsed from clicking to determine one acoustic image trajectory control point to clicking to determine the next one determines the time span between the two acoustic image trajectory points, and the moment corresponding to each acoustic image trajectory point is finally calculated;
2. Dragging generation: drag a sign (e.g. a mouse pointer) in the speaker distribution map along an arbitrary straight line, curve or broken line, so as to determine the acoustic image running path; while the sign is being dragged, starting from the initial position, one acoustic image trajectory control point is generated on the running path at every interval of time Ts. Ts is 108 ms in the present embodiment;
S205:Edit acoustic image trajectory control point attributes: the attributes of an acoustic image trajectory control point include the coordinate position of the control point, its corresponding moment, and the time required to reach the next acoustic image trajectory control point. One or more of the moment corresponding to a selected acoustic image trajectory control point, the time required from the selected control point to the next acoustic image trajectory control point, and the total duration corresponding to the acoustic image running path can be modified.
Suppose the moment corresponding to acoustic image trajectory control point i is ti, the time originally required for the acoustic image to run from control point i to the next trajectory point i+1 is ti', and the total duration corresponding to the acoustic image running path is t. This means that the time required for the acoustic image to run from the initial position to acoustic image trajectory control point i is ti, and the time required for the acoustic image to run through the entire path is t.
If the moment corresponding to a certain acoustic image trajectory control point is modified, the moments corresponding to all acoustic image trajectory control points before that control point, and the total duration of the acoustic image running path, all need to be adjusted. Suppose the original moment corresponding to acoustic image trajectory control point i is ti and the modified moment is Ti; the original moment corresponding to any acoustic image trajectory control point j before control point i is tj and its adjusted moment is Tj; the original total duration corresponding to the acoustic image running path is t and the modified total duration is T. Then Tj = tj + (tj/ti)*(Ti-ti), and T = t + (Ti-ti). The adjustment mode adopted by the present invention is simple and reasonable, and requires very little calculation.
It can be understood that, after the moment corresponding to any acoustic image trajectory control point is modified, the increased or decreased time can be distributed to all acoustic image trajectory control points before that control point in proportion to their durations (i.e. the aforementioned manner), or it can be distributed, in proportion to duration, over all acoustic image trajectory control points on the acoustic image running path. When the latter approach is used, suppose the time to be added at acoustic image trajectory control point i is ki; then the modified moment corresponding to each acoustic image trajectory control point is Ti = (ki*ti/t) + ti. That is, the time ki is not distributed equally among the acoustic image trajectory control points; each control point is allocated a portion of the time in proportion to the ratio of its moment to the total duration of the running path.
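The two redistribution modes just described can be sketched numerically. Mode 1 moves point i from ti to Ti and rescales only the earlier points by the same duration ratio, so Tj = tj + (tj/ti)*(Ti-ti) = tj*Ti/ti; mode 2 spreads an added time k over every point in proportion to its share of the total duration t. Function names, the list representation of moments, and the seconds unit are illustrative.

```python
def rescale_before(moments, i, new_ti):
    """Mode 1: each point j < i gets tj*Ti/ti; points after i keep their
    moments, and the total duration grows by (Ti - ti)."""
    ti = moments[i]
    out = list(moments)
    for j in range(i):
        out[j] = moments[j] * new_ti / ti
    out[i] = new_ti
    return out

def spread_over_all(moments, total, k):
    """Mode 2: every point receives its duration-proportional share of k,
    i.e. Ti = (k*ti/total) + ti."""
    return [m + k * m / total for m in moments]

moments = [0.0, 2.0, 4.0, 8.0]              # control-point moments, t = 8 s
print(rescale_before(moments, 2, 6.0))      # [0.0, 3.0, 6.0, 8.0]
print(spread_over_all(moments, 8.0, 4.0))   # [0.0, 3.0, 6.0, 12.0]
```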
If the time required from a certain acoustic image trajectory control point to the next acoustic image trajectory control point is adjusted, then the moment corresponding to the next acoustic image trajectory control point and the total duration of the acoustic image running path both need to be adjusted. Suppose the original moment corresponding to acoustic image trajectory control point i is ti and the modified moment is Ti; the time originally required for the acoustic image to run from control point i to the next trajectory point i+1 is ti' and the time required after modification is Ti'; the original total duration corresponding to the acoustic image running path is t and the modified total duration is T. Then Ti+1 = Ti + Ti', and T = t + (Ti-ti) + (Ti'-ti').
If the total duration corresponding to the acoustic image running path is modified, then the moment corresponding to each acoustic image trajectory control point on the running path, and the time from it to the next acoustic image trajectory control point, will all be adjusted. Suppose the original moment corresponding to acoustic image trajectory control point i is ti and the adjusted moment is Ti; the time originally required for the acoustic image to run from control point i to the next trajectory point i+1 is ti' and the time required after adjustment is Ti'; the original total duration corresponding to the acoustic image running path is t and the modified total duration is T. Then Ti = ti/t*(T-t)+ti, and Ti' = ti'/t*(T-t)+ti'.
S206:Record the variable-track acoustic image trajectory data: record the output level value of each speaker node at each moment while the acoustic image runs along the set running path.
For a variable-track acoustic image, the output level values of the speaker nodes involved in generating the acoustic image can be calculated by the following method. As shown in Fig. 21, suppose acoustic image trajectory point i (not necessarily an acoustic image trajectory control point) is located in a triangular area enclosed by three speaker nodes, and the moment corresponding to acoustic image trajectory point i is ti. At this moment the three speaker nodes at the vertex positions output levels of a certain size, while the output level values of the other speaker nodes in the speaker distribution map are zero or negative infinity, so as to ensure that at moment ti the acoustic image in the speaker distribution map is located at the above acoustic image trajectory point i. For the speaker node A at any vertex of the triangular area, the output level at this moment ti is dBA1 = 10*lg(LA'/LA), where LA' is the distance from the acoustic image trajectory point to the straight line formed by the other two vertices of the triangular area, and LA is the distance from speaker node A to the straight line formed by the other two vertices;
Further, each speaker node can also be set with an initialization level value. Suppose the initialization level of the above speaker node A is dBA; then at the above moment ti, the output level of speaker node A is dBA1' = dBA + 10*lg(LA'/LA). The output levels of the remaining speaker nodes at this moment are likewise computed after their initialization levels are set.
Further, as shown in Fig. 20, if any part of the acoustic image trajectory points (or of the acoustic image running path, e.g. the end of the movement trajectory) does not fall within any triangular area formed by three speaker nodes, auxiliary speaker nodes 13 can be set to establish new triangular areas, so as to ensure that all acoustic image trajectory points fall within corresponding triangular areas. An auxiliary speaker node has no corresponding output channel and outputs no level; it is only used to assist in determining the triangular areas;
Further, when recording the output level values of the speaker nodes, the recording can be continuous, or it can be done at a certain frequency; the latter means recording the output level value of each speaker node once at every interval of a certain time. In the present embodiment, the output level values of the speaker nodes while the acoustic image runs along the set path are recorded at a frequency of 25 frames/second or 30 frames/second. Recording the output level data of each speaker node at a certain frequency can reduce the amount of data, speed up the processing when performing acoustic image trajectory processing on the input audio signal, and ensure the real-time performance of the acoustic image running effect.
As shown in Fig. 22, variable-domain acoustic image trajectory data can be obtained by the following method:
S501:Set speaker nodes: in the speaker distribution map, add or delete speaker nodes.
S502:Modify speaker node attributes: the attributes of a speaker node include speaker coordinates, speaker type, corresponding output channel, initialization level, speaker name, etc. A speaker node is represented in the speaker distribution map by a speaker icon, and its coordinate position can be changed by moving the speaker icon. The speaker type refers to a full-range speaker or an ultra-low-frequency speaker; the concrete types can be divided according to actual conditions. Each speaker node in the speaker distribution map is allocated one output channel, and each output channel corresponds to one speaker entity in the entity speaker system; each speaker entity includes one or more co-located speakers, i.e. each speaker node can correspond to one or more co-located speakers. In order to reproduce the acoustic image running path designed in the speaker distribution map, the position distribution of the speaker entities should correspond to the position distribution of the speaker nodes in the speaker distribution map.
S503:Set the acoustic image running path and divide acoustic image areas: set multiple acoustic image areas in the speaker distribution map, each acoustic image area including several speaker nodes, and set a running path traversing each acoustic image area. That is, each acoustic image area is regarded as one "acoustic image point", and the acoustic image runs from one area to another in turn until it has run through all the acoustic image areas. The mutually non-overlapping acoustic image areas can be set anywhere in the speaker distribution map, or they can be set quickly in the following manner:
Set a straight acoustic image running path in the speaker distribution map, and set several acoustic image areas along the running path, the boundary of each acoustic image area being approximately perpendicular to the running direction of the acoustic image. These acoustic image areas can be arranged side by side or at intervals, but in order to ensure the continuity of the acoustic image movement (running), the side-by-side arrangement is preferred. The total area of these acoustic image areas is less than or equal to the area of the entire speaker distribution map. When dividing the acoustic image areas, equal-width or unequal-width division may be used.
In concrete operation, the acoustic image running path can be set and the acoustic image areas divided at the same time by dragging a sign (e.g. a mouse pointer). Specifically: drag the sign in the speaker distribution map from a certain start position along some direction to an end position, while dividing several acoustic image areas equally according to the straight-line distance from the start position to the end position; the boundary of each acoustic image area is perpendicular to the straight line from the start position to the end position, and the widths of the acoustic image areas are equal. The total duration of the acoustic image running is the time taken to drag the sign from the start position to the end position.
Suppose the straight-line distance of the sign from the start position to the end position is R, the total duration used is t, and the number of equally divided acoustic image areas is n; then n acoustic image areas of width R/n are generated automatically, and the time corresponding to each acoustic image area is t/n.
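The quick equal division just described reduces to simple arithmetic: a straight drag of length R over total duration t yields n areas of width R/n, each allotted t/n. The tuple representation (index, width, time slot) below is an illustrative choice.

```python
def divide_regions(R: float, t: float, n: int):
    """Divide a straight drag of length R and duration t into n equal
    acoustic image areas: each is R/n wide and allotted t/n of the run."""
    width, slot = R / n, t / n
    return [(i, width, slot) for i in range(1, n + 1)]

regions = divide_regions(R=12.0, t=6.0, n=4)
print(regions[0])    # (1, 3.0, 1.5): first area, width 3.0, 1.5 s allotted
print(len(regions))  # 4
```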
S504:Edit acoustic image area time attributes, including the moment corresponding to an acoustic image area, the time required from the current acoustic image area to the next acoustic image area, and the total duration of the acoustic image running. The editing of acoustic image area attributes is similar to the editing of variable-track acoustic image trajectory point attributes. If the moment corresponding to a certain acoustic image area is modified, the moments corresponding to all acoustic image areas before that area, and the total duration of the acoustic image running, all need to be adjusted. If the time required from a certain acoustic image area to the next acoustic image area is adjusted, the moment corresponding to the next acoustic image area and the total duration of the acoustic image running both need to be adjusted. If the total duration of the acoustic image running is modified, the moment corresponding to each acoustic image area on the running path, and the time from it to the next acoustic image area, will all be adjusted.
S505:Record the variable-domain acoustic image trajectory data: record the output level value of each speaker node at each moment while the acoustic image runs in turn through each acoustic image area along the set running path.
For a variable-domain acoustic image, the output level values of the speaker nodes involved in generating the acoustic image can be calculated by the following method.
As shown in Figure 23, assume that the total travel duration of a certain variable-domain track is t and that the path is divided into 4 acoustic image regions of equal width. The acoustic image travels along a straight path from acoustic image region 1 (acoustic image region i) to the next acoustic image region 2 (acoustic image region i+1). The midpoint of the segment of the travel path lying in acoustic image region 1 is acoustic image trajectory control point 1 (acoustic image trajectory control point i), and the midpoint of the segment lying in acoustic image region 2 is acoustic image trajectory control point 2 (acoustic image trajectory control point i+1). While the acoustic image trajectory point P travels from the current acoustic image region 1 to the next acoustic image region 2, the output level of each speaker node in region 1 is domain-1 dB (domain dBi), the output level of each speaker node in region 2 is domain-2 dB (domain dBi+1), and the output level of speaker nodes outside these two acoustic image regions is zero or negative infinity.
domain-1 dB = 10·loge(η) ÷ 2.3025851
domain-2 dB = 10·loge(β) ÷ 2.3025851
Here l12 is the distance from acoustic image trajectory control point 1 to acoustic image trajectory control point 2, l1P is the distance from acoustic image trajectory control point 1 to the acoustic image trajectory point P, and lP2 is the distance from the current acoustic image trajectory point P to acoustic image trajectory control point 2. (η and β are not defined in this passage; since 2.3025851 ≈ ln 10, the expressions equal 10·log10 η and 10·log10 β, and the boundary behaviour described next is consistent with reading them as the normalized distances η = lP2/l12 and β = l1P/l12.) From the above formulas, every acoustic image trajectory point has output levels in two acoustic image regions; but when the trajectory point coincides with an acoustic image trajectory control point, only one region has an output level. For example, when trajectory point P moves onto acoustic image trajectory control point 2, only region 2 has an output level, and the output level of region 1 is zero.
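Under the assumed reading that η = lP2/l12 and β = l1P/l12, the two level formulas can be sketched as follows; `domain_levels_db` is an illustrative name, and float("-inf") stands for the silent, "negative infinite" case:

```python
import math

def domain_levels_db(l12, l1P, lP2):
    """Levels for trajectory point P between control points 1 and 2.

    Assumes eta = lP2/l12 and beta = l1P/l12 (normalized distances);
    float('-inf') marks a silent region."""
    eta, beta = lP2 / l12, l1P / l12
    db1 = 10 * math.log(eta) / 2.3025851 if eta > 0 else float("-inf")
    db2 = 10 * math.log(beta) / 2.3025851 if beta > 0 else float("-inf")
    return db1, db2

# Midway between the control points both regions sit near -3 dB;
# at control point 2 region 1 is silent and region 2 is at 0 dB.
print(domain_levels_db(2.0, 1.0, 1.0))
print(domain_levels_db(2.0, 2.0, 0.0))
```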
When recording the output level values of each speaker node of a variable-domain acoustic image track, the recording may be continuous or at a fixed rate. The latter means recording the output level value of each speaker node once per fixed time interval. In the present embodiment, the output level value of each speaker node while the acoustic image travels along the set path is recorded at a rate of 25 or 30 frames per second. Recording the output level data of each speaker node at a fixed rate reduces the data volume, speeds up processing when the input audio signal undergoes acoustic image trajectory processing, and ensures the real-time quality of the acoustic image travel effect.
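The fixed-rate recording can be sketched as follows; `level_at` is a hypothetical stand-in for the per-moment level computation, and the 25 fps constant follows the embodiment (which allows 25 or 30):

```python
# Sketch: sample each speaker node's level at a fixed frame rate instead
# of continuously, reducing the amount of stored track data.
FRAME_RATE = 25  # frames per second, per the embodiment (25 or 30)

def record_track(level_at, total_duration_s):
    """level_at(t) -> dict of node -> dB; returns one snapshot per frame."""
    frames = []
    n_frames = int(total_duration_s * FRAME_RATE)
    for k in range(n_frames + 1):
        t = k / FRAME_RATE
        frames.append((t, level_at(t)))
    return frames

# A 2-second travel recorded at 25 fps yields 51 snapshots (t = 0 .. 2 s).
snapshots = record_track(lambda t: {"node1": -3.0}, 2.0)
print(len(snapshots))
```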
As shown in Figure 24, the fixed-point acoustic image track data can be obtained by the following method:
S701: Set speaker nodes: in the speaker distribution map, add or delete speaker nodes.
S702: Modify speaker node attributes: the attributes of a speaker node include speaker coordinates, speaker type, corresponding output channel, initial level, speaker name, and so on.
S703: Set acoustic image trajectory points and the total duration: select one or more speaker nodes in the speaker distribution map as acoustic image trajectory points, and set the dwell time of the acoustic image at each selected speaker node.
S704: Record the fixed-point acoustic image track data: record the output level value of each speaker node at each moment within the above total duration.
In addition, the acoustic image track data of the present invention further includes speaker link data. A speaker link is an association operation performed on speaker nodes: when an active speaker node among the associated nodes outputs a level, the passive speaker nodes associated with it automatically output a level as well. The speaker link data are, after the association operation on several selected speaker nodes, the differences between the output levels of the passive speaker nodes and that of the active speaker node. Speaker nodes to be linked should be relatively close to one another in spatial distribution.
As shown in Figure 25, the speaker link data can be obtained by the following method:
S801: Set speaker nodes: in the speaker distribution map, add or delete speaker nodes.
S802: Modify speaker node attributes: the attributes of a speaker node include speaker coordinates, speaker type, corresponding output channel, initial level, speaker name, and so on.
S803: Set speaker node link relationships: link the selected ultra-low-frequency speaker node to several nearby full-range speaker nodes;
S804: Record the speaker link data: calculate and record the output level DerivedTrim of the ultra-low-frequency speaker, where
DerivedTrim = 10·log(Ratio) + DeriveddB, and Ratio = Σi 10^((Trim_i + LinkTrim_i)/10),
in which Trim_i is the output level value of any linked full-range cabinet node i itself, LinkTrim_i is the link level originally set between full-range cabinet node i and the ultra-low-frequency speaker, DeriveddB is the initial level value of the ultra-low-frequency speaker node, and DerivedTrim is the output level value of the ultra-low-frequency speaker node after it is set to link to the several full-range cabinet nodes.
An ultra-low-frequency speaker node may be set to link to one or more full-range cabinet nodes. After linking, when a full-range cabinet node outputs a level, the ultra-low-frequency speaker node linked to it automatically outputs a level as well, so as to cooperate with the full-range cabinet node in creating a particular sound effect. For an ultra-low-frequency speaker node linked to a single full-range cabinet node, the output level of the ultra-low-frequency node when automatically following the full-range node's playback, i.e., the link level, can be set by considering only the distance between the two, the nature of the sound source, the desired sound effect, and so on.
As shown in Figure 26, assume that the ultra-low-frequency speaker node 24 in the speaker distribution map is linked to 3 nearby full-range cabinet nodes. The output level values of full-range cabinet nodes 21, 22, 23 themselves are Trim1, Trim2, and Trim3 respectively, and the link level values originally set between ultra-low-frequency speaker node 24 and full-range cabinet nodes 21, 22, 23 are LinkTrim1, LinkTrim2, and LinkTrim3 respectively. Let the overall level-summing ratio be Ratio, the initial level value of ultra-low-frequency speaker node 24 itself be DeriveddB, and the final output level value of ultra-low-frequency speaker node 24 be DerivedTrim; then:
Ratio = 10^((Trim1+LinkTrim1)/10) + 10^((Trim2+LinkTrim2)/10) + 10^((Trim3+LinkTrim3)/10)
DerivedTrim = 10·log(Ratio) + DeriveddB
When Ratio is greater than 1, the output level gained by ultra-low-frequency speaker node 24 from linking to these three full-range cabinet nodes is 0, i.e., its final output level value equals its initial level value.
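A sketch of the S804 computation, including the Ratio > 1 clamp described above (function and parameter names are illustrative, not from the patent):

```python
import math

# Sketch of the speaker-link computation: per the text, when Ratio > 1
# the link contribution is clamped to 0 so the subwoofer keeps its
# initial level DeriveddB.
def derived_trim(trims, link_trims, derived_db):
    """trims/link_trims: dB values per linked full-range node."""
    ratio = sum(10 ** ((t + lt) / 10) for t, lt in zip(trims, link_trims))
    link_gain = 10 * math.log10(ratio)
    if ratio > 1:  # clamp: the link contributes nothing above unity
        link_gain = 0.0
    return link_gain + derived_db

# Three linked nodes, each summing to -10 dB: Ratio = 0.3,
# link gain = 10*log10(0.3), roughly -5.23 dB.
print(derived_trim([-6.0, -6.0, -6.0], [-4.0, -4.0, -4.0], 0.0))
```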
Claims (3)
1. An audio control method, characterized by comprising:
adding audio tracks: adding, on a display interface, one or more audio tracks that are parallel and aligned to a time axis, each audio track corresponding to one output channel;
editing audio track attributes;
obtaining an audio material list from an audio server;
adding audio material: adding one or more audio materials selected from the audio material list to an audio track, generating in the audio track an audio material icon corresponding to the audio material, and generating an audio attribute control file corresponding to the audio material, the audio attribute control file being used to carry instructions sent by the integrated control console to the audio server; the length of audio track occupied by the audio material icon matches the total duration of the audio material;
editing audio material attributes, the audio material attributes including start position, end position, start time, end time, total duration, and playback time length;
adding sub-tracks: adding one or more sub-tracks corresponding to one of the audio tracks, each sub-track being parallel to the time axis and corresponding to the output channel of its associated audio track, the types of sub-track including acoustic image sub-tracks and sound-effect sub-tracks;
adding acoustic image sub-tracks and acoustic image material: adding one or more acoustic image materials to an acoustic image sub-track, and generating in the acoustic image sub-track an acoustic image material icon corresponding to the acoustic image material, the length of acoustic image sub-track occupied by the acoustic image material icon matching the total duration corresponding to the acoustic image material; the acoustic image material is acoustic image track data, the acoustic image track data includes variable-domain acoustic image track data, and the variable-domain acoustic image track data refers to, within a set total duration, the output level data of each speaker node changing over time so as to make the acoustic image travel along a preset region;
editing acoustic image sub-track attributes;
editing acoustic image material attributes, the acoustic image material attributes including start position, end position, start time, end time, total duration, and playback time length.
2. The audio control method according to claim 1, characterized in that the method further comprises:
adding sound-effect sub-tracks;
editing sound-effect sub-track attributes, the attributes of a sound-effect sub-track including audio effect processing parameters; by modifying the sound effect parameters of a sound-effect sub-track, the audio of the output channel corresponding to the audio track to which the sound-effect sub-track belongs can be adjusted.
3. The audio control method according to claim 2, characterized in that:
the types of sound-effect sub-track include volume-and-gain sub-tracks and EQ sub-tracks; each audio track can be provided with one volume-and-gain sub-track and one or more EQ sub-tracks; the volume-and-gain sub-track is used to adjust the signal level of the output channel corresponding to the audio track to which it belongs, and the EQ sub-track is used to apply EQ audio effect processing to the signal output by the output channel corresponding to the audio track to which it belongs.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310754800.8A CN104754178B (en) | 2013-12-31 | 2013-12-31 | audio control method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104754178A CN104754178A (en) | 2015-07-01 |
CN104754178B true CN104754178B (en) | 2018-07-06 |
Family
ID=53593239
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310754800.8A Active CN104754178B (en) | 2013-12-31 | 2013-12-31 | audio control method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104754178B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106937023B (en) * | 2015-12-31 | 2019-12-13 | 上海励丰创意展示有限公司 | multi-professional collaborative editing and control method for film, television and stage |
CN106937022B (en) * | 2015-12-31 | 2019-12-13 | 上海励丰创意展示有限公司 | multi-professional collaborative editing and control method for audio, video, light and machinery |
CN106937021B (en) * | 2015-12-31 | 2019-12-13 | 上海励丰创意展示有限公司 | performance integrated control method based on time axis multi-track playback technology |
CN105827997A (en) * | 2016-04-26 | 2016-08-03 | 厦门幻世网络科技有限公司 | Method and device for dubbing audio and visual digital media |
CN106454646A (en) * | 2016-08-13 | 2017-02-22 | 厦门傅里叶电子有限公司 | Method for synchronizing left and right channels in audio frequency amplifier |
CN109949792B (en) * | 2019-03-28 | 2021-08-13 | 优信拍(北京)信息科技有限公司 | Multi-audio synthesis method and device |
CN110392045B (en) * | 2019-06-28 | 2022-03-18 | 上海元笛软件有限公司 | Audio playing method and device, computer equipment and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1437137A (en) * | 2002-02-06 | 2003-08-20 | 北京新奥特集团 | Non-linear editing computer |
CN1561144A (en) * | 2004-03-12 | 2005-01-05 | 陈健俊 | 3D8-X stero amplifying system |
CN1826572A (en) * | 2003-06-02 | 2006-08-30 | 迪斯尼实业公司 | System and method of programmatic window control for consumer video players |
CN101916095A (en) * | 2010-07-27 | 2010-12-15 | 北京水晶石数字科技有限公司 | Rehearsal performance control method |
WO2012013858A1 (en) * | 2010-07-30 | 2012-02-02 | Nokia Corporation | Method and apparatus for determining and equalizing one or more segments of a media track |
CN103338420A (en) * | 2013-05-29 | 2013-10-02 | 陈健俊 | Control method of panoramic space stereo sound |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104754178B (en) | audio control method | |
CN104754186B (en) | Apparatus control method | |
CN108021714A (en) | A kind of integrated contribution editing system and contribution edit methods | |
CN106937022B (en) | multi-professional collaborative editing and control method for audio, video, light and machinery | |
US9142259B2 (en) | Editing device, editing method, and program | |
CN104750059B (en) | Lamp light control method | |
CN104750058B (en) | Panorama multi-channel audio control method | |
US10242712B2 (en) | Video synchronization based on audio | |
CN110139122A (en) | System and method for media distribution and management | |
CN102419997A (en) | Sound processing device, sound data selecting method and sound data selecting program | |
CN106937021B (en) | performance integrated control method based on time axis multi-track playback technology | |
CN104750051B (en) | Based on the panorama multi-channel audio control method for becoming the control of rail acoustic image | |
CN104754242B (en) | Based on the panorama multi-channel audio control method for becoming the processing of rail acoustic image | |
CN104754244B (en) | Panorama multi-channel audio control method based on variable domain audio-visual effects | |
CN104754243B (en) | Panorama multi-channel audio control method based on the control of variable domain acoustic image | |
CN102447842B (en) | Fast editing method and system supporting external medium selection, editing and uploading | |
CN103118322B (en) | A kind of surround sound audio-video processing system | |
CN104751869B (en) | Based on the panorama multi-channel audio control method for becoming the control of rail acoustic image | |
CN106937023B (en) | multi-professional collaborative editing and control method for film, television and stage | |
CN108073717A (en) | A kind of contribution editing machine based on control editor | |
CN104750055B (en) | Based on the panorama multi-channel audio control method for becoming rail audio-visual effects | |
CN104754241B (en) | Panorama multi-channel audio control method based on variable domain acoustic image | |
CN106937204B (en) | Panorama multichannel sound effect method for controlling trajectory | |
CN106937205B (en) | Complicated sound effect method for controlling trajectory towards video display, stage | |
CN104754447B (en) | Based on the link sound effect control method for becoming rail acoustic image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
EXSB | Decision made by sipo to initiate substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||