CN109087260A - A kind of image processing method and device - Google Patents
- Publication number
- CN109087260A (application CN201810864706.0A)
- Authority
- CN
- China
- Prior art keywords
- rendering
- image
- region
- processed
- rendering region
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/80—Geometric correction
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Graphics (AREA)
- Eye Examination Apparatus (AREA)
- Processing Or Creating Images (AREA)
Abstract
An embodiment of the present application discloses an image processing method and device. The method comprises: obtaining a to-be-processed image corresponding to the field of view of a target object, where the field of view includes a binocular field-of-view overlap range and a non-binocular field-of-view overlap range; within the binocular field-of-view overlap range, performing image processing on the to-be-processed image based on a binocular gaze point; and within the non-binocular field-of-view overlap range, performing image processing on the to-be-processed image based on a monocular gaze point. Because different gaze-point position determination strategies are used in different parts of the field of view, each determined gaze-point position is more accurate, and the quality of the image obtained by processing the to-be-processed image based on the determined gaze point is improved.
Description
Technical field
This application relates to the field of image processing, and in particular to a gaze-point position determination method and device.
Background technique
Virtual reality (VR) technology refers to a modern high-tech approach, with computer technology at its core, that generates a virtual environment. Through dedicated input/output devices, the user can interact naturally with objects in the virtual world and thereby obtain visual, auditory, tactile, and other sensations identical to those of the real world. The virtual environment is a dynamic, real-time virtual three-dimensional stereoscopic image generated by a computer; when the user turns the head or moves the eyes, the gaze-point position changes, so the user sees the virtual three-dimensional stereoscopic image from different directions.
Traditional gaze-point position determination methods track either both of the user's eyes or a single eye. Both approaches can determine the gaze-point position inaccurately, which in turn can degrade the quality of the image obtained after the to-be-processed image is processed according to that gaze-point position, harming the user experience.
Summary of the invention
To solve the prior-art problem of inaccurate gaze-point position determination, embodiments of the present application provide an image processing method and device.
In a first aspect, an embodiment of the present application provides an image processing method, the method comprising:
obtaining a to-be-processed image corresponding to the field of view of a target object, where the field of view includes a binocular field-of-view overlap range and a non-binocular field-of-view overlap range;
within the binocular field-of-view overlap range, performing image processing on the to-be-processed image based on a binocular gaze point; and
within the non-binocular field-of-view overlap range, performing image processing on the to-be-processed image based on a monocular gaze point.
Optionally, performing image processing on the to-be-processed image based on the binocular gaze point includes:
determining multiple rendering regions of the to-be-processed image according to the position coordinates of the binocular gaze point, where different rendering regions correspond to different rendering degrees; and
rendering the to-be-processed image according to the rendering degree corresponding to each of the multiple rendering regions;
and/or
performing image processing on the to-be-processed image based on the monocular gaze point includes:
determining multiple rendering regions of the to-be-processed image according to the position coordinates of the monocular gaze point, where different rendering regions correspond to different rendering degrees; and
rendering the to-be-processed image according to the rendering degree corresponding to each of the multiple rendering regions.
Optionally, the multiple rendering regions include at least two or more of the following:
a first rendering region, a second rendering region, a third rendering region, and a fourth rendering region;
where the first rendering region is a circular region of a first preset viewing angle centered on the position coordinates of the gaze point;
the second rendering region is the circular region of a second preset viewing angle, centered on the position coordinates of the gaze point, excluding the first rendering region;
the third rendering region is the circular region of a third preset viewing angle, centered on the position coordinates of the gaze point, excluding the first rendering region and the second rendering region; and
the fourth rendering region is the circular region of a fourth preset viewing angle, centered on the position coordinates of the gaze point, excluding the first rendering region, the second rendering region, and the third rendering region.
Optionally,
the first preset viewing angle is 1.5 degrees;
the second preset viewing angle is 35 degrees;
the third preset viewing angle is 60 degrees;
the fourth preset viewing angle is 110 degrees.
Optionally, rendering the to-be-processed image according to the rendering degree corresponding to each of the multiple rendering regions includes:
performing 100% image rendering on the portion of the to-be-processed image corresponding to the first rendering region;
performing 50% image rendering on the portion corresponding to the second rendering region;
performing 25% image rendering on the portion corresponding to the third rendering region;
performing 2% image rendering on the portion corresponding to the fourth rendering region.
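The per-region rendering degrees in the optional claims above can be read as a simple lookup table. The sketch below is purely illustrative (the patent contains no code, and the identifiers are assumptions):

```python
# Rendering degrees as listed in the claims: 100%, 50%, 25%, 2%.
RENDER_DEGREE = {
    "first": 1.00,   # foveal region: full rendering
    "second": 0.50,
    "third": 0.25,
    "fourth": 0.02,  # peripheral region: minimal rendering
}

def render_degree(region):
    """Return the rendering degree for a named rendering region."""
    return RENDER_DEGREE[region]
```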
In a second aspect, an embodiment of the present application provides an image processing apparatus, the apparatus comprising:
an acquiring unit, configured to obtain a to-be-processed image corresponding to the field of view of a target object, where the field of view includes a binocular field-of-view overlap range and a non-binocular field-of-view overlap range;
a first processing unit, configured to perform image processing on the to-be-processed image based on a binocular gaze point within the binocular field-of-view overlap range; and
a second processing unit, configured to perform image processing on the to-be-processed image based on a monocular gaze point within the non-binocular field-of-view overlap range.
Optionally, performing image processing on the to-be-processed image based on the binocular gaze point comprises:
determining multiple rendering regions of the to-be-processed image according to the position coordinates of the binocular gaze point, where different rendering regions correspond to different rendering degrees; and
rendering the to-be-processed image according to the rendering degree corresponding to each of the multiple rendering regions;
and/or
performing image processing on the to-be-processed image based on the monocular gaze point comprises:
determining multiple rendering regions of the to-be-processed image according to the position coordinates of the monocular gaze point, where different rendering regions correspond to different rendering degrees; and
rendering the to-be-processed image according to the rendering degree corresponding to each of the multiple rendering regions.
Optionally, the multiple rendering regions include at least two or more of the following:
a first rendering region, a second rendering region, a third rendering region, and a fourth rendering region;
where the first rendering region is a circular region of a first preset viewing angle centered on the position coordinates of the gaze point;
the second rendering region is the circular region of a second preset viewing angle, centered on the position coordinates of the gaze point, excluding the first rendering region;
the third rendering region is the circular region of a third preset viewing angle, centered on the position coordinates of the gaze point, excluding the first rendering region and the second rendering region; and
the fourth rendering region is the circular region of a fourth preset viewing angle, centered on the position coordinates of the gaze point, excluding the first rendering region, the second rendering region, and the third rendering region.
Optionally,
the first preset viewing angle is 1.5 degrees;
the second preset viewing angle is 35 degrees;
the third preset viewing angle is 60 degrees;
the fourth preset viewing angle is 110 degrees.
Optionally, the rendering unit is specifically configured to:
perform 100% image rendering on the portion of the to-be-processed image corresponding to the first rendering region;
perform 50% image rendering on the portion corresponding to the second rendering region;
perform 25% image rendering on the portion corresponding to the third rendering region;
perform 2% image rendering on the portion corresponding to the fourth rendering region.
Compared with the prior art, the embodiments of the present application have the following advantages:
In the image processing method and device provided by the embodiments of the present application, a to-be-processed image corresponding to the field of view of a target object is obtained, where the field of view includes a binocular field-of-view overlap range and a non-binocular field-of-view overlap range; within the binocular field-of-view overlap range, image processing is performed on the to-be-processed image based on a binocular gaze point; and within the non-binocular field-of-view overlap range, image processing is performed on the to-be-processed image based on a monocular gaze point. It can be seen that, with this method and device, different gaze-point position determination strategies are used in different parts of the field of view, so each determined gaze-point position is more accurate. Processing the to-be-processed image based on the gaze point so determined improves the quality of the processed image and thus the user experience.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Evidently, the drawings described below show only some embodiments recorded in this application, and those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of an image processing method provided by an embodiment of the present application;
Fig. 2 is a schematic diagram of the field of view of a target object provided by an embodiment of the present application;
Fig. 3 is a schematic flowchart of an image processing method provided by an embodiment of the present application;
Fig. 4 is a schematic diagram of multiple rendering regions provided by an embodiment of the present application;
Fig. 5 is a structural block diagram of an image processing apparatus provided by an embodiment of the present application.
Specific embodiment
To enable those skilled in the art to better understand the solutions of this application, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings in the embodiments. Evidently, the described embodiments are only some, rather than all, of the embodiments of this application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments in this application without creative effort shall fall within the protection scope of this application.
The inventor found in research that, in virtual reality technology, traditional gaze-point position determination methods track either both of the user's eyes or a single eye.
Specifically, when the gaze-point position is determined by tracking both eyes, i.e., according to both the user's left eye and right eye, a weight is set for each eye; in general, the right eye's weight is higher than the left eye's. Consequently, when the gaze point lies in the part of the left-eye field of view outside the binocular field-of-view overlap range, the determined gaze-point position is inaccurate.
When the gaze-point position is determined by tracking a single eye, i.e., according to either the user's left eye or right eye, the analogous problem arises: if tracking is based on the left eye, the determined gaze-point position is inaccurate whenever the gaze point lies in the part of the right-eye field of view outside the binocular field-of-view overlap range; if tracking is based on the right eye, it is inaccurate whenever the gaze point lies in the part of the left-eye field of view outside the binocular field-of-view overlap range.
This, in turn, can degrade the quality of the image obtained after the to-be-processed image is processed according to that gaze-point position, harming the user experience.
In view of this, in the embodiments of the present application, a to-be-processed image corresponding to the field of view of a target object is obtained, where the field of view includes a binocular field-of-view overlap range and a non-binocular field-of-view overlap range; within the binocular field-of-view overlap range, image processing is performed on the to-be-processed image based on a binocular gaze point; and within the non-binocular field-of-view overlap range, image processing is performed on the to-be-processed image based on a monocular gaze point. It can be seen that, with the image processing method and device provided by the embodiments of the present application, different gaze-point position determination strategies are used in different parts of the field of view, so each determined gaze-point position is more accurate. Processing the to-be-processed image based on the gaze point so determined improves the quality of the processed image and thus the user experience.
Method embodiment
Referring to Fig. 1, which is a schematic flowchart of an image processing method provided by an embodiment of the present application.
The image processing method provided in this embodiment can be implemented, for example, by steps S101-S103 below.
S101: obtain a to-be-processed image corresponding to the field of view of a target object, where the field of view includes a binocular field-of-view overlap range and a non-binocular field-of-view overlap range.
It should be noted that the target object referred to in the embodiments of the present application may be a user of a virtual reality device.
It should be noted that, in the following description of the embodiments of the present application, "binocular" refers to both the left eye and the right eye of the target object, while "monocular" refers to either the left eye or the right eye of the target object.
It should be noted that the embodiments of the present application do not specifically limit the to-be-processed image; it may be any of various images generated by a virtual reality system.
It will be understood that, owing to human physiology, a person's field of view is limited; that is, the field of view of the target object is limited. Images within the field of view of the target object can be observed by the target user, while images outside that field of view cannot.
It should be noted that the binocular field-of-view overlap range referred to in this embodiment is the field of view observable by both the left eye and the right eye, while the non-binocular field-of-view overlap range is the whole field of view of the target object minus the binocular field-of-view overlap range. This can be illustrated with Fig. 2, which shows the field of view of a target object: the unshaded portion 201 is the binocular field-of-view overlap range, and the shaded portions 202 and 203 together form the non-binocular field-of-view overlap range.
It should be noted that, in a specific implementation, obtaining the to-be-processed image may involve first obtaining the image data of the to-be-processed image and then obtaining the to-be-processed image from that data.
S102: within the binocular field-of-view overlap range, process the to-be-processed image based on a binocular gaze point.
It should be noted that the gaze point referred to in the embodiments of the present application is the point on an object at which the line of sight is directed during visual perception.
It should be noted that, in the embodiments of the present application, the binocular gaze point may be obtained in advance, and the embodiments do not specifically limit how it is obtained. As an example, it may be obtained with any one or more of an optical system, a MEMS device, a capacitance sensor, or a myoelectric current sensor.
It will be understood that the binocular field-of-view overlap range is the field of view observable by both eyes, i.e., by both the left eye and the right eye. Within the binocular field-of-view overlap range, the binocular gaze point can therefore be considered accurate, and the to-be-processed image can be processed based on it.
S103: within the non-binocular field-of-view overlap range, perform image processing on the to-be-processed image based on a monocular gaze point.
It should be noted that, as with the binocular gaze point, the monocular gaze point may be obtained in advance, and the embodiments of the present application do not specifically limit how it is obtained; as an example, it may be obtained with any one or more of an optical system, a MEMS device, a capacitance sensor, or a myoelectric current sensor.
It will be understood that the non-binocular field-of-view overlap range is the whole field of view minus the binocular field-of-view overlap range.
It will be understood that, within the non-binocular field-of-view overlap range, the binocular gaze point may be inaccurate while the monocular gaze point can be considered accurate; therefore, the to-be-processed image can be processed based on the monocular gaze point.
It can be seen that, with the image processing method provided by the embodiments of the present application, different gaze-point position determination strategies are used in different parts of the field of view, so each determined gaze-point position is more accurate. Processing the to-be-processed image based on the gaze point so determined improves the quality of the processed image and thus the user experience.
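The strategy of steps S101-S103 can be sketched as a small selection function. This is an illustrative sketch only; the patent provides no code, and the function and parameter names are assumptions:

```python
def select_gaze_point(binocular_gaze, monocular_gaze, in_binocular_overlap):
    """S102/S103: inside the binocular field-of-view overlap range the
    binocular gaze point is used; outside it, the monocular gaze point."""
    return binocular_gaze if in_binocular_overlap else monocular_gaze
```

Image processing of the to-be-processed image then proceeds with whichever gaze point this returns.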
It will be understood that the non-binocular field-of-view overlap range includes a first field range and a second field range: the first field range is the left-eye field of view minus the binocular field-of-view overlap range, and the second field range is the right-eye field of view minus the binocular field-of-view overlap range. Specifically, this can be understood with Fig. 2, in which the non-binocular field-of-view overlap range comprises the first field range 202 and the second field range 203.
Correspondingly, in the embodiments of the present application, when the gaze point of the target object falls within the non-binocular field-of-view overlap range, the system switches from binocular tracking mode to monocular tracking mode to obtain the monocular gaze point of the target object. In a specific implementation:
within the first field range, the position coordinates of the monocular gaze point in the to-be-processed image are determined according to the position of the left eyeball; within the second field range, the position coordinates of the monocular gaze point in the to-be-processed image are determined according to the position of the right eyeball.
It will be understood that the first field range 202 is observable by the left eye but not by the right eye; therefore, within the first field range, determining the position coordinates of the monocular gaze point in the to-be-processed image from the position of the left eyeball makes those coordinates more accurate.
The second field range 203 is observable by the right eye but not by the left eye; therefore, within the second field range, the position coordinates of the monocular gaze point in the to-be-processed image are determined from the position of the right eyeball, again making those coordinates more accurate.
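The first/second field-range rule above can be summarized as a lookup over the Fig. 2 region labels (201, 202, 203). A minimal sketch; the function name and string labels are illustrative assumptions, not part of the patent:

```python
def gaze_source(region):
    """Map a Fig. 2 field-of-view region to the eye(s) used for tracking:
    201 = binocular overlap, 202 = left-eye-only, 203 = right-eye-only."""
    sources = {201: "both eyes", 202: "left eye", 203: "right eye"}
    if region not in sources:
        raise ValueError(f"unknown field-of-view region: {region}")
    return sources[region]
```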
It should be noted that, in the embodiments of the present application, performing image processing on the to-be-processed image includes rendering the to-be-processed image. Specifically, rendering the to-be-processed image according to the binocular gaze point or the monocular gaze point can be implemented, for example, by steps S301-S302 below.
S301: determine multiple rendering regions of the to-be-processed image according to the position coordinates of the binocular gaze point or the monocular gaze point, where different rendering regions correspond to different rendering degrees.
It should be noted that the embodiments of the present application do not limit the position coordinates; they may be coordinates in a pre-established three-dimensional coordinate system or coordinates in another form, and can be determined according to the actual situation.
It should be noted that the embodiments of the present application do not limit the number of rendering regions; the number can be determined according to the actual situation.
In one possible implementation, the multiple rendering regions include at least any two or more of a first rendering region, a second rendering region, a third rendering region, and a fourth rendering region.
Here, the first rendering region is a circular region of a first preset viewing angle centered on the position coordinates of the gaze point;
the second rendering region is the circular region of a second preset viewing angle, centered on the position coordinates of the gaze point, excluding the first rendering region;
the third rendering region is the circular region of a third preset viewing angle, centered on the position coordinates of the gaze point, excluding the first rendering region and the second rendering region; and
the fourth rendering region is the circular region of a fourth preset viewing angle, centered on the position coordinates of the gaze point, excluding the first rendering region, the second rendering region, and the third rendering region.
Specifically, this can be understood with Fig. 4, a schematic diagram of multiple rendering regions provided by an embodiment of the present application.
In Fig. 4, the origin O is the gaze point; the circular region at 410 is the first rendering region, the annular region at 420 is the second rendering region, the annular region at 430 is the third rendering region, and the annular region at 440 is the fourth rendering region.
It should be noted that the embodiments of the present application do not specifically limit the first preset viewing angle, the second preset viewing angle, the third preset viewing angle, or the fourth preset viewing angle; each can be set according to the actual situation.
The inventor found in research that, in real life, when a person observes things, the central field of vision spans a 1.5-degree viewing angle: within 1.5 degrees of the gaze point, the observed object is clearest. Within 35 degrees of the gaze point, the color of the observed object can be perceived; that is, the object is still clear. Within 60 degrees of the gaze point, stereoscopic vision is produced, so the observed object remains reasonably clear. The 110-degree angular field around the gaze point is the absolute monocular field of the human eye; within it, objects lie at the border of the field of view and appear relatively blurry.
In view of this, the multiple rendering regions can be determined according to these characteristics of human vision. In one possible implementation of the embodiments of the present application, the first preset viewing angle may be 1.5 degrees, the second preset viewing angle may be 35 degrees, the third preset viewing angle may be 60 degrees, and the fourth preset viewing angle may be 110 degrees.
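Under the assumption that each preset viewing angle denotes a full cone around the gaze direction, so a region's boundary lies at half that angle (a detail the embodiment does not spell out), the region containing a point can be sketched from its angular offset to the gaze point:

```python
# Preset viewing angles from the embodiment: 1.5, 35, 60 and 110 degrees.
# Boundaries are taken at half the cone angle (an assumption, see above).
_REGION_BOUNDS = [(0.75, "first"), (17.5, "second"),
                  (30.0, "third"), (55.0, "fourth")]

def rendering_region(angle_from_gaze_deg):
    """Return the rendering region for a given angular offset from the
    gaze point, or None outside the 110-degree absolute field."""
    for bound, name in _REGION_BOUNDS:
        if angle_from_gaze_deg <= bound:
            return name
    return None
```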
S302: render the to-be-processed image according to the rendering degree corresponding to each of the multiple rendering regions.
It should be noted that the embodiments of the present application do not specifically limit the rendering degrees; they can be set according to the actual situation.
It will be understood that, in virtual reality technology, the aim is to let the user interact naturally with objects in the virtual world and thereby obtain visual, auditory, tactile, and other sensations identical to those of the real world. The virtual three-dimensional stereoscopic image displayed to the user should therefore match the user's visual experience in real life.
As described above, what a person observes in the first rendering region is clearest, i.e., the to-be-processed image observed by the target object is clearest in the first rendering region. In the second rendering region the observed image is still clear; in the third rendering region it is reasonably clear; and in the fourth rendering region it is relatively blurry.
In view of this, in the embodiments of the present application, step S302 can, in a specific implementation, perform 100% image rendering on the portion of the to-be-processed image corresponding to the first rendering region, 50% image rendering on the portion corresponding to the second rendering region, 25% image rendering on the portion corresponding to the third rendering region, and 2% image rendering on the portion corresponding to the fourth rendering region. The rendered image then matches the user's visual experience in real life, giving the user a good experience.
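One possible reading of the 100%/50%/25%/2% rendering degrees is as the fraction of pixels actually shaded in each region; the patent does not fix the mechanism (resolution scaling is an equally valid reading), so the sketch below is only illustrative:

```python
# Rendering degrees per region, from the embodiment above.
DEGREES = {"first": 1.00, "second": 0.50, "third": 0.25, "fourth": 0.02}

def shaded_pixel_budget(region_pixel_counts):
    """Total number of pixels to shade, given how many pixels of the
    to-be-processed image fall into each rendering region."""
    return sum(DEGREES[region] * count
               for region, count in region_pixel_counts.items())
```

For 100 pixels in each region, only 177 of 400 pixels are shaded, which illustrates the rendering saving this scheme targets.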
As it was noted above, the field range of target object is limited.In the image within the vision of target object, it is
The image that target user is able to observe that, and the image except the field range of target object, target user do not observe then.
In consideration of it, step S301 in specific implementation, can be according to described in a kind of possible implementation of the embodiment of the present application
The position coordinates of blinkpunkt and the distance between the blinkpunkt and visual field boundary determine multiple wash with watercolours of the image to be processed
Contaminate region.
Visual field boundary mentioned herein, refers to, target object it is observed that image to be processed boundary.
It can be understood that individual differences exist in real life, i.e. the fields of view of different people may differ, and therefore the field-of-view boundaries of different people also differ. Accordingly, in one possible implementation of the embodiment of the present application, the field of view of the target object may be obtained in advance so as to determine the field-of-view boundary. Specifically, the field of view of the target object may be measured with a suitable measuring instrument, thereby determining the field-of-view boundary; alternatively, a field of view entered by the target object may be received, thereby determining the field-of-view boundary.
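A minimal sketch of how a field-of-view range obtained in advance, whether measured by an instrument or entered by the target object, might be stored and used to derive the boundary (the class, attribute names, and the angular representation are all assumptions, not from the patent text):

```python
# Illustrative sketch: store a per-user field-of-view range obtained in
# advance and expose the boundary as horizontal/vertical half-angles.
# All names and the angular representation are assumptions.

class FieldOfView:
    def __init__(self, horizontal_deg, vertical_deg):
        self.horizontal_deg = horizontal_deg
        self.vertical_deg = vertical_deg

    def boundary_half_angles(self):
        """Return the half-angles (degrees) bounding the observable
        image horizontally and vertically."""
        return self.horizontal_deg / 2.0, self.vertical_deg / 2.0

# The two acquisition paths described in the text:
measured = FieldOfView(110.0, 90.0)   # e.g. from a measuring instrument
entered = FieldOfView(100.0, 80.0)    # e.g. entered by the target object
```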
Determining the multiple rendering regions of the image to be processed according to the position coordinates of the gaze point and the distance between the gaze point and the field-of-view boundary may include at least the following cases.
First case: if the distance between the gaze point and the field-of-view boundary is greater than or equal to a first distance, the rendering regions of the image to be processed are determined to be the first rendering region, the second rendering region, the third rendering region, and the fourth rendering region. Here, the first distance refers to the radius of the circular region of the fourth preset viewing angle, centered on the position coordinates of the gaze point.
It can be understood that, since the distance between the gaze point and the field-of-view boundary is greater than or equal to the first distance, the first, second, third, and fourth rendering regions all lie within the field-of-view boundary; therefore, the rendering regions of the image to be processed are determined to be the first rendering region, the second rendering region, the third rendering region, and the fourth rendering region.
Second case: if the distance between the gaze point and the field-of-view boundary is less than the first distance but greater than or equal to a second distance, the rendering regions of the image to be processed are determined to be the first rendering region, the second rendering region, and the third rendering region. Here, the second distance refers to the radius of the circular region of the third preset viewing angle, centered on the position coordinates of the gaze point.
It is understood that since blinkpunkt is less than first distance at a distance from the boundary of the visual field, and it is more than or equal to the
Two distances, i.e., the first rendering region, the second rendering region and third rendering region are within the boundary of the visual field, and the 4th rendering area
Domain not within the boundary of the visual field, so, the rendering region of the image to be processed will be determined as the first rendering region, the second wash with watercolours
It contaminates region and third renders region.
Third case: if the distance between the gaze point and the field-of-view boundary is less than the second distance but greater than or equal to a third distance, the rendering regions of the image to be processed are determined to be the first rendering region and the second rendering region. Here, the third distance refers to the radius of the circular region of the second preset viewing angle, centered on the position coordinates of the gaze point.
It can be understood that, since the distance between the gaze point and the field-of-view boundary is less than the second distance but greater than or equal to the third distance, the first and second rendering regions lie within the field-of-view boundary while the third and fourth rendering regions do not; therefore, the rendering regions of the image to be processed are determined to be the first rendering region and the second rendering region.
Fourth case: in particular, if the gaze point is very close to the field-of-view boundary, i.e. the distance between the gaze point and the field-of-view boundary is less than the third distance, the rendering region of the image to be processed may be determined to be the part of the first rendering region that lies within the field-of-view boundary.
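The four cases above amount to a simple threshold test on the distance from the gaze point to the field-of-view boundary. The following sketch assumes all distances are given in consistent units; the function name and the returned labels are illustrative, not from the patent text:

```python
# Illustrative sketch: select which rendering regions lie within the
# field-of-view boundary, following the four cases described above.
# The "distances" are the radii of the circular regions of the fourth,
# third, and second preset viewing angles respectively.

def select_regions(dist_to_boundary, first_dist, second_dist, third_dist):
    """Return the rendering regions to use, given the distance from the
    gaze point to the field-of-view boundary."""
    if dist_to_boundary >= first_dist:      # case 1: all four regions fit
        return ["first", "second", "third", "fourth"]
    if dist_to_boundary >= second_dist:     # case 2: fourth region is cut off
        return ["first", "second", "third"]
    if dist_to_boundary >= third_dist:      # case 3: third region is cut off too
        return ["first", "second"]
    # case 4: gaze point very close to the boundary; render only the
    # part of the first region that lies inside the boundary
    return ["first (clipped to boundary)"]
```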
It can be seen that, with the image processing method provided by the embodiments of the present application, multiple different rendering regions can be determined according to the position coordinates of the target object's gaze point, with different rendering degrees for different regions, rather than rendering the entire virtual three-dimensional stereoscopic image within the target object's field of view at full quality. This improves rendering efficiency and reduces power consumption.
Device embodiments
Based on the image processing method provided by the above embodiments, an embodiment of the present application further provides an image processing apparatus, whose working principle is described in detail below with reference to the accompanying drawings.
Referring to Fig. 5, Fig. 5 is a structural block diagram of an image processing apparatus provided by an embodiment of the present application.
The image processing apparatus 500 provided in this embodiment may include, for example, an acquiring unit 510, a first processing unit 520, and a second processing unit 530.
The acquiring unit 510 is configured to obtain an image to be processed corresponding to the field of view of a target object, the field of view including a binocular-overlap range and a non-binocular-overlap range;
the first processing unit is configured to perform, within the binocular-overlap range, image processing on the image to be processed based on a binocular gaze point;
the second processing unit is configured to perform, within the non-binocular-overlap range, image processing on the image to be processed based on a monocular gaze point.
Optionally, performing image processing on the image to be processed includes:
determining multiple rendering regions of the image to be processed according to the position coordinates of the binocular gaze point or the monocular gaze point, different rendering regions corresponding to different rendering degrees;
performing image rendering on the image to be processed according to the rendering degrees corresponding to the multiple rendering regions.
Optionally, the multiple rendering regions include at least two or more of the following:
a first rendering region, a second rendering region, a third rendering region, and a fourth rendering region;
wherein the first rendering region is the circular region of a first preset viewing angle, centered on the position coordinates of the gaze point;
the second rendering region is the circular region of a second preset viewing angle, centered on the position coordinates of the gaze point, excluding the first rendering region;
the third rendering region is the circular region of a third preset viewing angle, centered on the position coordinates of the gaze point, excluding the first rendering region and the second rendering region;
the fourth rendering region is the circular region of a fourth preset viewing angle, centered on the position coordinates of the gaze point, excluding the first rendering region, the second rendering region, and the third rendering region.
Optionally,
the first preset viewing angle is a 1.5-degree viewing angle;
the second preset viewing angle is a 35-degree viewing angle;
the third preset viewing angle is a 60-degree viewing angle;
the fourth preset viewing angle is a 110-degree viewing angle.
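Given these preset viewing angles, a point can be assigned to one of the four concentric regions by its angular offset from the gaze direction. The sketch below assumes each preset angle is a full cone angle, so a point falls in a region when its offset is within half that angle; this half-angle interpretation and all names are assumptions, not from the patent text:

```python
# Illustrative sketch: classify a point into one of the four concentric
# rendering regions by its angular offset from the gaze point, using the
# preset viewing angles from the embodiment (1.5, 35, 60, 110 degrees).
# Treating each preset angle as a full cone angle is an assumption.

PRESET_ANGLES_DEG = [1.5, 35.0, 60.0, 110.0]
REGIONS = ["first", "second", "third", "fourth"]

def region_of(angular_offset_deg):
    """Return the rendering region for a point at the given angular
    offset (degrees) from the gaze direction, or None if outside all."""
    for angle, region in zip(PRESET_ANGLES_DEG, REGIONS):
        if angular_offset_deg <= angle / 2.0:
            return region
    return None
```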
Optionally, the rendering unit is specifically configured to:
perform 100% image rendering on the part of the image to be processed corresponding to the first rendering region;
perform 50% image rendering on the part of the image to be processed corresponding to the second rendering region;
perform 25% image rendering on the part of the image to be processed corresponding to the third rendering region;
perform 2% image rendering on the part of the image to be processed corresponding to the fourth rendering region.
Since this apparatus corresponds to the method provided by the above embodiments, the specific implementation of each of its units may refer to the description of the related content in the above method embodiments; details are not repeated here.
It can be seen that, with the image processing apparatus provided by the embodiments of the present application, different gaze-point position determination strategies are used in different parts of the field of view, so that each determined gaze-point position within the field of view is relatively accurate. The image to be processed is then processed based on the determined gaze points, which improves the quality of the processed image and thus the user experience.
When introducing elements of various embodiments of the present application, the articles "a", "an", "the", and "said" are intended to mean that there are one or more of the elements. The terms "include", "comprise", and "have" are inclusive and mean that there may be additional elements other than the listed ones.
It should be noted that, as those of ordinary skill in the art will appreciate, all or part of the processes in the above method embodiments may be implemented by a computer program instructing relevant hardware. The program may be stored in a computer-readable storage medium and, when executed, may include the processes of the above method embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), or the like.
The embodiments in this specification are described in a progressive manner; identical or similar parts of the embodiments may refer to one another, and each embodiment focuses on its differences from the others. In particular, the apparatus embodiments are described relatively briefly because they are substantially similar to the method embodiments; for relevant details, refer to the description of the method embodiments. The apparatus embodiments described above are merely illustrative, and the units and modules described as separate components may or may not be physically separate. Some or all of the units and modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment, which those of ordinary skill in the art can understand and implement without creative effort.
The above are only specific embodiments of the present application. It should be noted that those of ordinary skill in the art may make several improvements and modifications without departing from the principles of the present application, and such improvements and modifications shall also be regarded as falling within the protection scope of the present application.
Claims (10)
1. An image processing method, characterized in that the method comprises:
obtaining an image to be processed corresponding to the field of view of a target object, wherein the field of view includes a binocular-overlap range and a non-binocular-overlap range;
within the binocular-overlap range, performing image processing on the image to be processed based on a binocular gaze point;
within the non-binocular-overlap range, performing image processing on the image to be processed based on a monocular gaze point.
2. The method according to claim 1, characterized in that performing image processing on the image to be processed based on the binocular gaze point comprises:
determining multiple rendering regions of the image to be processed according to the position coordinates of the binocular gaze point, different rendering regions corresponding to different rendering degrees;
performing image rendering on the image to be processed according to the rendering degrees corresponding to the multiple rendering regions;
and/or
performing image processing on the image to be processed based on the monocular gaze point comprises:
determining multiple rendering regions of the image to be processed according to the position coordinates of the monocular gaze point, different rendering regions corresponding to different rendering degrees;
performing image rendering on the image to be processed according to the rendering degrees corresponding to the multiple rendering regions.
3. The method according to claim 2, characterized in that the multiple rendering regions include at least two or more of the following:
a first rendering region, a second rendering region, a third rendering region, and a fourth rendering region;
wherein the first rendering region is the circular region of a first preset viewing angle, centered on the position coordinates of the gaze point;
the second rendering region is the circular region of a second preset viewing angle, centered on the position coordinates of the gaze point, excluding the first rendering region;
the third rendering region is the circular region of a third preset viewing angle, centered on the position coordinates of the gaze point, excluding the first rendering region and the second rendering region;
the fourth rendering region is the circular region of a fourth preset viewing angle, centered on the position coordinates of the gaze point, excluding the first rendering region, the second rendering region, and the third rendering region.
4. The method according to claim 3, characterized in that:
the first preset viewing angle is a 1.5-degree viewing angle;
the second preset viewing angle is a 35-degree viewing angle;
the third preset viewing angle is a 60-degree viewing angle;
the fourth preset viewing angle is a 110-degree viewing angle.
5. The method according to claim 3 or 4, characterized in that performing image rendering on the image to be processed according to the rendering degrees corresponding to the multiple rendering regions comprises:
performing 100% image rendering on the part of the image to be processed corresponding to the first rendering region;
performing 50% image rendering on the part of the image to be processed corresponding to the second rendering region;
performing 25% image rendering on the part of the image to be processed corresponding to the third rendering region;
performing 2% image rendering on the part of the image to be processed corresponding to the fourth rendering region.
6. An image processing device, characterized in that the device comprises:
an acquiring unit, configured to obtain an image to be processed corresponding to the field of view of a target object, the field of view including a binocular-overlap range and a non-binocular-overlap range;
a first processing unit, configured to perform, within the binocular-overlap range, image processing on the image to be processed based on a binocular gaze point;
a second processing unit, configured to perform, within the non-binocular-overlap range, image processing on the image to be processed based on a monocular gaze point.
7. The device according to claim 6, characterized in that performing image processing on the image to be processed based on the binocular gaze point comprises:
determining multiple rendering regions of the image to be processed according to the position coordinates of the binocular gaze point, different rendering regions corresponding to different rendering degrees;
performing image rendering on the image to be processed according to the rendering degrees corresponding to the multiple rendering regions;
and/or
performing image processing on the image to be processed based on the monocular gaze point comprises:
determining multiple rendering regions of the image to be processed according to the position coordinates of the monocular gaze point, different rendering regions corresponding to different rendering degrees;
performing image rendering on the image to be processed according to the rendering degrees corresponding to the multiple rendering regions.
8. The device according to claim 7, characterized in that the multiple rendering regions include at least two or more of the following:
a first rendering region, a second rendering region, a third rendering region, and a fourth rendering region;
wherein the first rendering region is the circular region of a first preset viewing angle, centered on the position coordinates of the gaze point;
the second rendering region is the circular region of a second preset viewing angle, centered on the position coordinates of the gaze point, excluding the first rendering region;
the third rendering region is the circular region of a third preset viewing angle, centered on the position coordinates of the gaze point, excluding the first rendering region and the second rendering region;
the fourth rendering region is the circular region of a fourth preset viewing angle, centered on the position coordinates of the gaze point, excluding the first rendering region, the second rendering region, and the third rendering region.
9. The device according to claim 8, characterized in that:
the first preset viewing angle is a 1.5-degree viewing angle;
the second preset viewing angle is a 35-degree viewing angle;
the third preset viewing angle is a 60-degree viewing angle;
the fourth preset viewing angle is a 110-degree viewing angle.
10. The device according to claim 8 or 9, characterized in that performing image rendering on the image to be processed according to the rendering degrees corresponding to the multiple rendering regions comprises:
performing 100% image rendering on the part of the image to be processed corresponding to the first rendering region;
performing 50% image rendering on the part of the image to be processed corresponding to the second rendering region;
performing 25% image rendering on the part of the image to be processed corresponding to the third rendering region;
performing 2% image rendering on the part of the image to be processed corresponding to the fourth rendering region.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201810864706.0A CN109087260A (en) | 2018-08-01 | 2018-08-01 | A kind of image processing method and device |
| PCT/CN2019/077304 WO2020024593A1 (en) | 2018-08-01 | 2019-03-07 | Method and device for image processing |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201810864706.0A CN109087260A (en) | 2018-08-01 | 2018-08-01 | A kind of image processing method and device |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN109087260A true CN109087260A (en) | 2018-12-25 |
Family
ID=64831274
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201810864706.0A Pending CN109087260A (en) | 2018-08-01 | 2018-08-01 | A kind of image processing method and device |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN109087260A (en) |
| WO (1) | WO2020024593A1 (en) |
Cited By (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110378914A (en) * | 2019-07-22 | 2019-10-25 | 北京七鑫易维信息技术有限公司 | Rendering method and device, system, display equipment based on blinkpunkt information |
| WO2020024593A1 (en) * | 2018-08-01 | 2020-02-06 | 北京七鑫易维信息技术有限公司 | Method and device for image processing |
| WO2020215960A1 (en) * | 2019-04-24 | 2020-10-29 | 京东方科技集团股份有限公司 | Method and device for determining area of gaze, and wearable device |
| CN112465939A (en) * | 2020-11-25 | 2021-03-09 | 上海哔哩哔哩科技有限公司 | Panoramic video rendering method and system |
Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20150279022A1 (en) * | 2014-03-31 | 2015-10-01 | Empire Technology Development Llc | Visualization of Spatial and Other Relationships |
| CN105425399A (en) * | 2016-01-15 | 2016-03-23 | 中意工业设计(湖南)有限责任公司 | Method for rendering user interface of head-mounted equipment according to human eye vision feature |
| CN106327584A (en) * | 2016-08-24 | 2017-01-11 | 上海与德通讯技术有限公司 | Image processing method and device for virtual reality equipment |
| CN106485790A (en) * | 2016-09-30 | 2017-03-08 | 珠海市魅族科技有限公司 | Method and device that a kind of picture shows |
| CN106570923A (en) * | 2016-09-27 | 2017-04-19 | 乐视控股(北京)有限公司 | Frame rendering method and device |
| US20170285735A1 (en) * | 2016-03-31 | 2017-10-05 | Sony Computer Entertainment Inc. | Reducing rendering computation and power consumption by detecting saccades and blinks |
| CN108287678A (en) * | 2018-03-06 | 2018-07-17 | 京东方科技集团股份有限公司 | A kind of image processing method, device, equipment and medium based on virtual reality |
Family Cites Families (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN105282532B (en) * | 2014-06-03 | 2018-06-22 | 天津拓视科技有限公司 | 3D display method and apparatus |
| US10176639B2 (en) * | 2014-11-27 | 2019-01-08 | Magic Leap, Inc. | Virtual/augmented reality system having dynamic region resolution |
| CN109087260A (en) * | 2018-08-01 | 2018-12-25 | 北京七鑫易维信息技术有限公司 | A kind of image processing method and device |
- 2018-08-01: CN application CN201810864706.0A (publication CN109087260A), status: pending
- 2019-03-07: WO application PCT/CN2019/077304 (publication WO2020024593A1), status: ceased
Patent Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20150279022A1 (en) * | 2014-03-31 | 2015-10-01 | Empire Technology Development Llc | Visualization of Spatial and Other Relationships |
| CN105425399A (en) * | 2016-01-15 | 2016-03-23 | 中意工业设计(湖南)有限责任公司 | Method for rendering user interface of head-mounted equipment according to human eye vision feature |
| US20170285735A1 (en) * | 2016-03-31 | 2017-10-05 | Sony Computer Entertainment Inc. | Reducing rendering computation and power consumption by detecting saccades and blinks |
| CN106327584A (en) * | 2016-08-24 | 2017-01-11 | 上海与德通讯技术有限公司 | Image processing method and device for virtual reality equipment |
| CN106570923A (en) * | 2016-09-27 | 2017-04-19 | 乐视控股(北京)有限公司 | Frame rendering method and device |
| CN106485790A (en) * | 2016-09-30 | 2017-03-08 | 珠海市魅族科技有限公司 | Method and device that a kind of picture shows |
| CN108287678A (en) * | 2018-03-06 | 2018-07-17 | 京东方科技集团股份有限公司 | A kind of image processing method, device, equipment and medium based on virtual reality |
Cited By (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2020024593A1 (en) * | 2018-08-01 | 2020-02-06 | 北京七鑫易维信息技术有限公司 | Method and device for image processing |
| WO2020215960A1 (en) * | 2019-04-24 | 2020-10-29 | 京东方科技集团股份有限公司 | Method and device for determining area of gaze, and wearable device |
| CN110378914A (en) * | 2019-07-22 | 2019-10-25 | 北京七鑫易维信息技术有限公司 | Rendering method and device, system, display equipment based on blinkpunkt information |
| CN112465939A (en) * | 2020-11-25 | 2021-03-09 | 上海哔哩哔哩科技有限公司 | Panoramic video rendering method and system |
| CN112465939B (en) * | 2020-11-25 | 2023-01-24 | 上海哔哩哔哩科技有限公司 | Panoramic video rendering method and system |
| US12400393B2 (en) | 2020-11-25 | 2025-08-26 | Shanghai Bilibili Technology Co., Ltd. | Method and system for rendering panoramic video |
Also Published As
| Publication number | Publication date |
|---|---|
| WO2020024593A1 (en) | 2020-02-06 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11272990B2 (en) | System and method of utilizing three-dimensional overlays with medical procedures | |
| Liu et al. | Multiple-object tracking is based on scene, not retinal, coordinates. | |
| Rubin | The role of junctions in surface completion and contour matching | |
| Rolland et al. | Towards quantifying depth and size perception in virtual environments | |
| CN109087260A (en) | A kind of image processing method and device | |
| CN109644266B (en) | Stereoscopic visualization system capable of achieving depth perception of surgical area | |
| US20140285641A1 (en) | Three-dimensional display device, three-dimensional image processing device, and three-dimensional display method | |
| CN110321005B (en) | Method and device for improving virtual object display effect of AR equipment, AR equipment and storage medium | |
| Tas et al. | An object-mediated updating account of insensitivity to transsaccadic change | |
| CN106959759A (en) | A kind of data processing method and device | |
| WO2019152619A1 (en) | Blink-based calibration of an optical see-through head-mounted display | |
| US10901214B2 (en) | Method and device for controlling display of image and head-mounted display | |
| JP2022511571A (en) | Dynamic convergence adjustment for augmented reality headsets | |
| CN105992965A (en) | Stereoscopic display responsive to focal-point shift | |
| CN110378914A (en) | Rendering method and device, system, display equipment based on blinkpunkt information | |
| CN109656373A (en) | One kind watching independent positioning method and positioning device, display equipment and storage medium attentively | |
| CN111248851A (en) | Visual function self-testing method | |
| Hoffman et al. | Focus information is used to interpret binocular images | |
| Richards | Visual space perception | |
| KR20170111938A (en) | Apparatus and method for replaying contents using eye tracking of users | |
| JP2023515205A (en) | Display method, device, terminal device and computer program | |
| CN110433062B (en) | Visual function training system based on dynamic video images | |
| CN106851249A (en) | Image processing method and display device | |
| Hussain et al. | Modelling foveated depth-of-field blur for improving depth perception in virtual reality | |
| Greenwald et al. | A comparison of visuomotor cue integration strategies for object placement and prehension |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| RJ01 | Rejection of invention patent application after publication | ||
| RJ01 | Rejection of invention patent application after publication |
Application publication date: 20181225 |