CN106133794A - Information processing method, information processing apparatus and program - Google Patents
Information processing method, information processing apparatus and program
- Publication number
- CN106133794A CN106133794A CN201580013724.2A CN201580013724A CN106133794A CN 106133794 A CN106133794 A CN 106133794A CN 201580013724 A CN201580013724 A CN 201580013724A CN 106133794 A CN106133794 A CN 106133794A
- Authority
- CN
- China
- Prior art keywords
- image
- output
- editor
- smart phone
- change
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/04—Context-preserving transformations, e.g. by using an importance map
- G06T3/047—Fisheye or wide-angle transformations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/60—Editing figures and text; Combining figures or text
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/45—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/62—Control of parameters via user interfaces
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
- H04N23/631—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/66—Remote control of cameras or camera parts, e.g. by remote control devices
- H04N23/661—Transmitting camera control signals through networks, e.g. control via the Internet
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/698—Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/451—Execution arrangements for user interfaces
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Signal Processing (AREA)
- Multimedia (AREA)
- Image Processing (AREA)
- Processing Or Creating Images (AREA)
- Studio Devices (AREA)
- Closed-Circuit Television Systems (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
An information processing method for causing a computer to process an image, wherein the information processing method causes the computer to execute: an acquisition step of acquiring the image; a generation step of generating an editing image for editing a predetermined area of the image and a changing image for changing the predetermined area to be output; and an output step of outputting an output image that includes at least the editing image and the changing image.
Description
Technical field
An aspect of the present invention relates to at least one of an information processing method, an information processing apparatus, and a program.
Background art
Methods for displaying a panoramic image are known.
A user interface (hereinafter referred to as a "UI") for accepting instructions from a user concerning the display of a panoramic image is also known (see, for example, Japanese Patent Application Publication No. 2011-076249).
However, in a conventional UI for displaying an image on a smartphone or the like, the function for scrolling the image is assigned to a so-called "drag" operation, and it may therefore be difficult for the user to perform other image operations (such as editing the image).
Summary of the invention
According to one aspect of the present invention, there is provided an information processing method for causing a computer to process an image, wherein the information processing method causes the computer to execute: an acquisition step of acquiring the image; a generation step of generating an editing image for editing a predetermined area of the image and a changing image for changing the predetermined area to be output; and an output step of outputting an output image that includes at least the editing image and the changing image.
According to another aspect of the present invention, there is provided an information processing apparatus that processes an image, wherein the information processing apparatus includes: an acquisition part that acquires the image; a generation part that generates an editing image for editing a predetermined area of the image and a changing image for changing the predetermined area to be output; and an output part that outputs an output image that includes at least the editing image and the changing image.
According to yet another aspect of the present invention, there is provided a program for causing a computer to process an image, wherein the program causes the computer to execute: an acquisition step of acquiring the image; a generation step of generating an editing image for editing a predetermined area of the image and a changing image for changing the predetermined area to be output; and an output step of outputting an output image that includes at least the editing image and the changing image.
Brief description of the drawings
Fig. 1 is a diagram illustrating one example of the overall configuration of an image capturing system according to an embodiment of the present invention.
Fig. 2A, Fig. 2B and Fig. 2C are diagrams illustrating one example of an image capturing device according to an embodiment of the present invention.
Fig. 3 is a diagram illustrating one example of image capturing performed by an image capturing device according to an embodiment of the present invention.
Fig. 4A, Fig. 4B and Fig. 4C are diagrams illustrating one example of images captured by an image capturing device according to an embodiment of the present invention.
Fig. 5 is a block diagram illustrating one example of the hardware configuration of an image capturing device according to an embodiment of the present invention.
Fig. 6 is a block diagram illustrating one example of the hardware configuration of a smartphone according to an embodiment of the present invention.
Fig. 7 is a sequence diagram illustrating one example of the overall processing of an image capturing system according to an embodiment of the present invention.
Fig. 8A, Fig. 8B, Fig. 8C and Fig. 8D are diagrams illustrating one example of a full spherical image according to an embodiment of the present invention.
Fig. 9 is a diagram illustrating one example of a full spherical panoramic image according to an embodiment of the present invention.
Fig. 10A, Fig. 10B, Fig. 10C and Fig. 10D are diagrams for illustrating one example of an initial image according to an embodiment of the present invention.
Fig. 11 is a diagram illustrating one example of an output image in an initial state for performing editing of an image, according to an embodiment of the present invention.
Fig. 12A, Fig. 12B and Fig. 12C are diagrams for illustrating one example of editing an area to be output, according to an embodiment of the present invention.
Fig. 13A and Fig. 13B are diagrams for illustrating one example of enlarging or reducing an area to be output, according to an embodiment of the present invention.
Fig. 14 is a diagram for illustrating one example of another zoom process according to an embodiment of the present invention.
Fig. 15 is a table for illustrating one example of another zoom process according to an embodiment of the present invention.
Fig. 16A, Fig. 16B, Fig. 16C, Fig. 16D and Fig. 16E are diagrams for illustrating one example of the "range" of another zoom process according to an embodiment of the present invention.
Fig. 17A and Fig. 17B are diagrams for illustrating one example of editing performed on a predetermined area based on an editing image, according to an embodiment of the present invention.
Fig. 18 is a flowchart illustrating one example of the overall processing of a smartphone according to an embodiment of the present invention.
Fig. 19A and Fig. 19B are diagrams for illustrating one example of changing the output (for example, changing the position or orientation of an image), according to an embodiment of the present invention.
Fig. 20 is a functional diagram for illustrating one example of the functional configuration of an image capturing system according to an embodiment of the present invention.
Detailed description of the invention
Embodiments of the present invention are described below.
<first embodiment>
<Overall configuration of the system>
Fig. 1 is a diagram illustrating one example of the overall configuration of an image capturing system according to an embodiment of the present invention.
The image capturing system 10 includes an image capturing device 1 and a smartphone 2.
The image capturing device 1 has a plurality of optical systems, and generates a captured image covering a wide range, for example, all directions around the image capturing device 1 (such an image is hereinafter referred to as a "full spherical image"), and outputs it to the smartphone 2. Details of the image capturing device 1 and of the full spherical image are described below. The image processed by the image capturing system 10 is, for example, a full spherical image. A panoramic image is, for example, a full spherical image. An example of the full spherical image is described below.
The information processing apparatus is, for example, the smartphone 2. The smartphone 2 is described below as an example. The smartphone 2 is a device for allowing a user to operate the full spherical image acquired from the image capturing device 1. The smartphone 2 is also a device for allowing the user to output the acquired full spherical image. Details of the smartphone 2 are described below.
The image capturing device 1 and the smartphone 2 are connected by wire or wirelessly. For example, the smartphone 2 downloads data, such as the full spherical image output from the image capturing device 1, and inputs it to the smartphone 2. Here, the connection may also be made through a network.
Here, the overall configuration is not limited to the configuration shown in Fig. 1. For example, the image capturing device 1 and the smartphone 2 may be an integrated device. Furthermore, another computer other than the image capturing device 1 and the smartphone 2 may be connected, so that the system is composed of three or more devices.
<Image capturing device>
Fig. 2A, Fig. 2B and Fig. 2C are diagrams illustrating one example of an image capturing device according to an embodiment of the present invention.
Fig. 2A, Fig. 2B and Fig. 2C are diagrams illustrating one example of the appearance of the image capturing device 1. Fig. 2A is one example of a front view of the image capturing device 1. Fig. 2B is one example of a left side view of the image capturing device 1. Fig. 2C is one example of a plan view of the image capturing device 1.
The image capturing device 1 has a front-side image capturing element 1H1, a rear-side image capturing element 1H2, and a switch 1H3. The hardware provided inside the image capturing device 1 is described below.
The image capturing device 1 generates a full spherical image by using the images captured by the front-side image capturing element 1H1 and the rear-side image capturing element 1H2.
The switch 1H3 is a so-called "shutter button", and is an input device for allowing the user to instruct the image capturing device 1 to capture an image.
The image capturing device 1 is held by a user's hand, for example, as shown in Fig. 2A, and the switch 1H3 is pressed to perform image capturing.
Fig. 3 is a diagram illustrating one example of image capturing performed by an image capturing device according to an embodiment of the present invention.
As shown in Fig. 3, the user holds the image capturing device 1 by hand and presses the switch 1H3 in Fig. 2A, Fig. 2B and Fig. 2C to perform image capturing. As shown in Fig. 3, the image capturing device 1 can capture images in all directions around the image capturing device 1 by means of the front-side image capturing element 1H1 in Fig. 2A, Fig. 2B and Fig. 2C and the rear-side image capturing element 1H2 in Fig. 2A, Fig. 2B and Fig. 2C.
Fig. 4 A, Fig. 4 B and Fig. 4 C are to illustrate image captured by image picking-up apparatus according to an embodiment of the invention
The diagram of one example.
Fig. 4 A is an example of the image captured by front side image capturing element 1H1 in Fig. 2 A, Fig. 2 B and Fig. 2 C.Figure
4B is an example of the image captured by rear image capturing element 1H2 in Fig. 2 A, Fig. 2 B and Fig. 2 C.Fig. 4 C is based on figure
Rear side figure in the image captured by front side image capturing element 1H1 in 2A, Fig. 2 B and Fig. 2 C and Fig. 2 A, Fig. 2 B and Fig. 2 C
An example as the image that the image captured by capturing element 1H2 produces.
The image captured by front side image capturing element 1H1 in Fig. 2 A, Fig. 2 B and Fig. 2 C is to have to set at image taking
It is the image of the image capturing range (such as the scope of 180 ° of visual angle) of wide scope in the front direction of standby 1, such as Fig. 4 A
Shown in.Front side image capturing element 1H1 in Fig. 2 A, Fig. 2 B and Fig. 2 C uses the light of the image for shooting with wide scope
Front side image capturing element 1H1 institute in the case of system (the most so-called " fish eye lens "), in Fig. 2 A, Fig. 2 B and Fig. 2 C
The image of shooting has the distorton aberration shown in Fig. 4 A.Captured by front side image capturing element 1H1 in Fig. 2 A, Fig. 2 B and Fig. 2 C
Fig. 4 A in image be so-called " the hemisphere figure with wide scope and distorton aberration on the side of image picking-up apparatus 1
Picture " (it is referred to as " hemisphere image " following).
It is desirable here that visual angle is in more than or equal to 180 ° and less than or equal to 200 ° in the range of.Specifically, when regarding
When angle is more than the hemisphere image in composite diagram 4A in the case of 180 ° and by hemisphere image in Fig. 4 B described below, deposit
In overlapping image region, therefore, synthesis is promoted.
The image captured by rear image capturing element 1H2 in Fig. 2 A, Fig. 2 B and Fig. 2 C is to have to set at image taking
It is the image of the image capturing range (such as the scope of 180 ° of visual angle) of wide scope in the posterior direction of standby 1, such as Fig. 4 B
Shown in.
The image in Fig. 4 B captured by rear image capturing element 1H2 in Fig. 2 A, Fig. 2 B and Fig. 2 C is and Fig. 4 A phase
As hemisphere image.
Image picking-up apparatus 1 performs process (such as distortion correction treatment and synthesis process), and thus from Fig. 4 A
Rear side hemisphere image in front side hemisphere image and Fig. 4 B produces the image shown in Fig. 4 C.Fig. 4 C is such as Mercator ' s projection,
The produced image (i.e. whole day ball image) such as cylindrical equidistant projection.
Here, whole day ball image is not limited to image produced by image picking-up apparatus 1.Whole day ball image can be the most another
Image captured by one camera etc. or image based on the image generation captured by another camera.Expect that whole day ball image is so-called
The image at the captured visual angle having in wide scope such as " omnirange camera ", so-called " wide-angle lens camera ".
Additionally, will describe whole day ball image as example, and image is not limited to this whole day ball image.Image can be
The image etc. that such as compact camera, single lens reflex camera, smart phone etc. are captured.Image can be horizontally or vertically
The panoramic picture etc. extended.
<Hardware configuration of the image capturing device>
Fig. 5 is a block diagram illustrating one example of the hardware configuration of an image capturing device according to an embodiment of the present invention.
The image capturing device 1 has an image capturing unit 1H4, an image processing unit 1H7, an image capturing control unit 1H8, a central processing unit (CPU) 1H9, and a read-only memory (ROM) 1H10. Furthermore, the image capturing device 1 has a static random access memory (SRAM) 1H11, a dynamic random access memory (DRAM) 1H12, and an operation interface (I/F) 1H13. Furthermore, the image capturing device 1 has a network I/F 1H14, a wireless I/F 1H15, and an antenna 1H16. The components of the image capturing device 1 are connected by a bus 1H17, and input or output data or signals through it.
The image capturing unit 1H4 has the front-side image capturing element 1H1 and the rear-side image capturing element 1H2, together with a lens 1H5 corresponding to the front-side image capturing element 1H1 and a lens 1H6 corresponding to the rear-side image capturing element 1H2. The front-side image capturing element 1H1 and the rear-side image capturing element 1H2 are so-called "camera units". The front-side image capturing element 1H1 and the rear-side image capturing element 1H2 have optical sensors such as a complementary metal oxide semiconductor (CMOS) sensor or a charge coupled device (CCD). The front-side image capturing element 1H1 performs a process of converting the light incident on the lens 1H5 to generate image data. The rear-side image capturing element 1H2 performs a process of converting the light incident on the lens 1H6 to generate image data. The image capturing unit 1H4 outputs the image data generated by the front-side image capturing element 1H1 and the rear-side image capturing element 1H2 to the image processing unit 1H7. The image data are, for example, the front-side hemispherical image in Fig. 4A and the rear-side hemispherical image in Fig. 4B.
Here, the front-side image capturing element 1H1 and the rear-side image capturing element 1H2 may have optical elements other than the lenses (such as a band-stop filter or a low-pass filter) in order to perform image capturing with high image quality. Furthermore, the front-side image capturing element 1H1 and the rear-side image capturing element 1H2 may perform processes such as so-called "defective pixel correction" or so-called "camera-shake correction" in order to perform image capturing with high image quality.
The image processing unit 1H7 performs a process of generating the full spherical image in Fig. 4C from the image data input from the image capturing unit 1H4. Details of the process of generating the full spherical image are described below. Here, a part or all of the process performed by the image processing unit 1H7 may be executed in parallel or redundantly by another computer.
The image capturing control unit 1H8 is a control device that controls the components of the image capturing device 1.
The CPU 1H9 performs the calculations and control for each process performed by the image capturing device 1. For example, the CPU 1H9 executes various kinds of programs. Here, the CPU 1H9 may include a plurality of CPUs or devices, or a plurality of cores, in order to achieve acceleration through parallel processing. Furthermore, the process of the CPU 1H9 may be such that another hardware resource is provided inside or outside the image capturing device 1 and is made to execute a part or all of the processes of the image capturing device 1.
The ROM 1H10, the SRAM 1H11, and the DRAM 1H12 are examples of storage devices. The ROM 1H10 stores, for example, programs, data, or parameters executed by the CPU 1H9. In a case where the CPU 1H9 executes a program, the SRAM 1H11 and the DRAM 1H12 store, for example, the program, data used by the program, data to be generated by the program, parameters, and the like. Here, the image capturing device 1 may have an auxiliary storage device (such as a hard disk).
The operation I/F 1H13 is an interface, such as the switch 1H3, that performs a process of inputting a user's operation into the image capturing device 1. The operation I/F 1H13 is an operation device such as a switch, a connector or cable for connecting the operation device, a circuit for processing a signal input from the operation device, a driver, a control device, and the like. Here, the operation I/F 1H13 may have an output device (such as a display). Furthermore, the operation I/F 1H13 may be a so-called "touch panel" or the like in which an input device and an output device are integrated. Furthermore, the operation I/F 1H13 may have an interface such as a Universal Serial Bus (USB), connect a storage medium such as Flash Memory (registered trademark), and input data to and output data from the image capturing device 1.
Here, the switch 1H3 may have a power source switch, a parameter input switch, and the like for performing operations other than the shutter operation.
The network I/F 1H14, the wireless I/F 1H15, and the antenna 1H16 are devices for connecting the image capturing device 1 with another computer through a wireless or wired network, a peripheral circuit, and the like. For example, the image capturing device 1 is connected to a network through the network I/F 1H14 and transmits data to the smartphone 2. Here, the network I/F 1H14, the wireless I/F 1H15, and the antenna 1H16 may be configured to connect by using a connector such as a USB, a cable, or the like.
The bus 1H17 inputs or outputs data and the like between the components of the image capturing device 1. The bus 1H17 is a so-called "internal bus". The bus 1H17 is, for example, a Peripheral Component Interconnect Express (PCI Express) bus.
Here, the image capturing device 1 is not limited to the case of two image capturing elements. For example, it may have three or more image capturing elements. Furthermore, the image capturing device 1 may change the image capturing angle of one image capturing element to capture a plurality of partial images. Furthermore, the image capturing device 1 is not limited to an optical system using a fisheye lens. For example, a wide-angle lens may be used.
Here, the processes performed by the image capturing device 1 need not all be performed by the image capturing device 1. While the image capturing device 1 sends data or parameters, the smartphone 2 or another computer connected through the network may execute a part or all of the processes performed by the image capturing device 1.
<Hardware configuration of the information processing apparatus>
Fig. 6 is a block diagram illustrating one example of the hardware configuration of an information processing apparatus, including a smartphone, according to an embodiment of the present invention.
The information processing apparatus is a computer. The information processing apparatus may be, other than a smartphone, for example, a notebook personal computer (PC), a personal digital assistant (PDA), a tablet computer, a mobile phone, or the like.
The smartphone 2, as one example of the information processing apparatus, has an auxiliary storage device 2H1, a main storage device 2H2, an input/output device 2H3, a state sensor 2H4, a CPU 2H5, and a network I/F 2H6. The components of the smartphone 2 are connected to a bus 2H7, and input or output data or signals through it.
The auxiliary storage device 2H1 stores information such as various kinds of data, parameters, or programs, including intermediate results of processes executed by the CPU 2H5 under the control of the CPU 2H5, a control device, or the like. The auxiliary storage device 2H1 is, for example, a hard disk, a flash solid state drive (SSD), or the like. Here, a part or all of the information stored in the auxiliary storage device 2H1 may be stored in a file server or the like connected to the network I/F 2H6, instead of in the auxiliary storage device 2H1.
The main storage device 2H2 is a main storage device, such as a memory area used by a program executed by the CPU 2H5 (a so-called "memory"). The main storage device 2H2 stores information such as data, programs, or parameters. The main storage device 2H2 is, for example, a static random access memory (SRAM), a DRAM, or the like. The main storage device 2H2 may have a control device for performing storage in the memory or acquisition from the memory.
The input/output device 2H3 is a device having the functions of an output device for performing display and an input device for inputting a user's operation.
The input/output device 2H3 is a so-called "touch panel", a "peripheral circuit", a "driver", and the like.
The input/output device 2H3 performs a process of displaying, to a user, for example, a predetermined graphical user interface (GUI) or an image input into the smartphone 2.
For example, in a case where the user operates a GUI or an image being displayed, the input/output device 2H3 performs a process of inputting the user's operation.
The state sensor 2H4 is a sensor for detecting the state of the smartphone 2. The state sensor 2H4 is a gyro sensor, an angle sensor, or the like. The state sensor 2H4 determines, for example, whether one side of the smartphone 2 is at a predetermined or greater angle relative to the horizontal. That is, the state sensor 2H4 detects whether the smartphone 2 is held in a portrait-orientation posture or a landscape-orientation posture.
The CPU 2H5 performs the calculations in each process executed by the smartphone 2 and the control of the devices provided in the smartphone 2. For example, the CPU 2H5 executes various kinds of programs. Here, the CPU 2H5 may include a plurality of CPUs or devices, or a plurality of cores, in order to execute processes in parallel, redundantly, or dispersedly. Furthermore, the process for the CPU 2H5 may be such that another hardware resource is provided inside or outside the smartphone 2 and is made to execute a part or all of the processes of the smartphone 2. For example, the smartphone 2 may have a graphics processing unit (GPU) for performing image processing and the like.
The network I/F 2H6 is a device, such as an antenna, a peripheral circuit, a driver, and the like, for inputting or outputting data while being connected to another computer through a wireless or wired network. For example, the smartphone 2 performs a process of inputting image data from the image capturing device 1 by means of the CPU 2H5 and the network I/F 2H6. The smartphone 2 performs a process of outputting predetermined parameters and the like to the image capturing device 1 by means of the CPU 2H5 and the network I/F 2H6.
<Overall processing of the image capturing system>
Fig. 7 is a sequence diagram illustrating one example of the overall processing of an image capturing system according to an embodiment of the present invention.
In step S0701, the image capturing device 1 performs a process of generating a full spherical image.
Fig. 8A, Fig. 8B, Fig. 8C and Fig. 8D are diagrams illustrating one example of a full spherical image according to an embodiment of the present invention.
Fig. 8A, Fig. 8B, Fig. 8C and Fig. 8D are diagrams for illustrating one example of the process of generating the full spherical image at step S0701.
Fig. 8A is a diagram in which the positions in the hemispherical image in Fig. 4A where the incidence angles relative to the optical axis are equal in the horizontal direction or the vertical direction are connected by lines. In the following, the incidence angle in the horizontal direction relative to the optical axis is denoted by θ, and the incidence angle in the vertical direction relative to the optical axis is denoted by φ.
Similarly to Fig. 8A, Fig. 8B is a diagram in which the positions in the hemispherical image in Fig. 4B where the incidence angles relative to the optical axis are equal in the horizontal direction or the vertical direction are connected by lines.
Fig. 8C is a diagram illustrating one example of an image processed according to the Mercator projection. Fig. 8C is an example of a case where an image in the state shown in Fig. 8A or Fig. 8B is made to correspond to a look-up table (LUT) or the like generated in advance, and is processed according to an equirectangular projection.
Fig. 8D is an example of the combination process in which the process shown in Fig. 8C is applied to the images of Fig. 8A and Fig. 8B, and the processed images are combined.
As shown in Fig. 8D, the combination process is, for example, a process of generating an image by using a plurality of images in the state shown in Fig. 8C. Here, the combination process is not limited to a process of simply arranging the pre-processed images in succession. For example, in a case where the center of the full spherical image in the horizontal direction is not at θ = 180°, the combination process may be a process of performing the combination in such a way that the pre-processed image of Fig. 4A is arranged at the center of the full spherical image and the pre-processed image of Fig. 4B is divided and arranged at the left and right sides thereof, thereby generating the full spherical image shown in Fig. 4C.
Here, the process of generating the full spherical image is not limited to a process according to an equirectangular projection. For example, there is a so-called "upside-down" case in which the alignment of the pixels in the φ direction in Fig. 8B is inverted relative to the alignment in Fig. 8A, and the alignment of the pixels in the θ direction is reversed left-to-right relative to the alignment in Fig. 8A. In the upside-down case, the image capturing device 1 may perform a process of rolling or rotating the pre-processed image in the state of Fig. 8B by 180°, in order to align it with the alignment of the pixels in the φ direction and the θ direction in Fig. 8A.
Furthermore, the process of generating the full spherical image may perform a process of correcting the distortion aberration of the images in the states of Fig. 8A and Fig. 8B. Furthermore, the process of generating the full spherical image may perform processes for improving the image quality (for example, shading correction, gamma correction, white balance, camera-shake correction, an optical black correction process, a defective pixel correction process, an edge enhancement process, a linearity correction process, and the like).
Here, for example, in a case where the image capturing range of one hemispherical image and the image capturing range of the other hemispherical image overlap, the combination process can perform correction by utilizing the overlapping range, in order to perform the combination process with high precision.
Through the process of generating the full spherical image, the image capturing device 1 generates the full spherical image from the hemispherical images captured by the image capturing device 1.
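As a rough illustration of the mapping underlying this generation process, the sketch below computes, for a direction (θ, φ) in the equirectangular output, the corresponding pixel in one fisheye frame. The function name, the equidistant lens model, the 190° field of view, and the image size are illustrative assumptions, since the patent does not specify the lens projection.

    import numpy as np

    def equirect_to_fisheye(theta, phi, fov_deg=190.0, size=1024):
        # Direction on the unit sphere for the incidence angles (theta, phi).
        x = np.cos(phi) * np.sin(theta)
        y = np.sin(phi)
        z = np.cos(phi) * np.cos(theta)
        # Angle from the optical axis; in an equidistant fisheye the image
        # radius grows linearly with this angle.
        angle = np.arccos(np.clip(z, -1.0, 1.0))
        r = angle / np.radians(fov_deg / 2.0)     # 0..1 inside the image circle
        norm = np.hypot(x, y) + 1e-12
        u = (size / 2.0) * (1.0 + r * x / norm)   # fisheye pixel column
        v = (size / 2.0) * (1.0 + r * y / norm)   # fisheye pixel row
        return u, v

Sampling both fisheye frames through such a mapping, and blending within the overlapping range, yields a combined image of the kind shown in Fig. 8D.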
In step S0702, the smartphone 2 performs a process of acquiring the full spherical image generated at step S0701. The case where the smartphone 2 acquires the full spherical image in Fig. 8D is described below as an example.
In step S0703, the smartphone 2 generates a full spherical panoramic image from the full spherical image acquired at step S0702.
Fig. 9 is a diagram illustrating one example of a full spherical panoramic image according to an embodiment of the present invention.
In step S0703, the smartphone 2 performs a process of generating the full spherical panoramic image in Fig. 9 from the full spherical image in Fig. 8D. The full spherical panoramic image is an image provided in such a way that the full spherical image is applied onto a spherical shape.
The process of generating the full spherical panoramic image is realized by, for example, an application programming interface (API) such as OpenGL for Embedded Systems (OpenGL ES) (registered trademark).
The full spherical panoramic image is generated by dividing the image into triangles, joining the vertices P of the triangles (hereinafter referred to as "vertices P"), and applying the polygons thereof.
In step S0704, the smartphone 2 performs a process of causing the user to input an operation for starting the output of an image. In step S0704, the smartphone 2, for example, reduces and outputs the full spherical panoramic image generated at step S0703, that is, displays a so-called "thumbnail image". In a case where a plurality of full spherical panoramic images are stored in the smartphone 2, the smartphone 2 outputs a list of thumbnail images, for example, so that the user can select an image to be output. In step S0704, the smartphone 2 performs, for example, a process of inputting an operation by which the user selects one image from the list of thumbnail images.
In step S0705, the smartphone 2 performs a process of generating an initial image based on the full spherical panoramic image selected at step S0704.
Fig. 10A, Fig. 10B, Fig. 10C and Fig. 10D are diagrams for illustrating one example of an initial image according to an embodiment of the present invention.
Fig. 10A is a diagram illustrating a three-dimensional coordinate system for illustrating one example of an initial image according to an embodiment of the present invention.
As shown in Fig. 10A, a three-dimensional coordinate system with XYZ axes is used in the following description. The smartphone 2 places a virtual camera 3 at the position of the origin, and generates each image from the viewpoint of the virtual camera 3. In the case of the coordinate system of Fig. 10A, the full spherical panoramic image is represented by, for example, a sphere CS. The virtual camera 3 corresponds to the viewpoint of a user viewing the full spherical panoramic image, which is the sphere CS at the position where it is placed.
Fig. 10B is a diagram for illustrating one example of the predetermined area for the virtual camera according to an embodiment of the present invention.
Fig. 10B represents the situation of Fig. 10A by three views. Fig. 10B is the case where the virtual camera 3 is placed at the origin of Fig. 10A. Fig. 10C is a projection view of one example of the predetermined area for the virtual camera according to an embodiment of the present invention.
The predetermined area T is the area where the angle of view of the virtual camera 3 is projected onto the sphere CS. The smartphone 2 generates an image based on the predetermined area T.
Fig. 10D is a diagram for illustrating one example of information for determining the predetermined area for the virtual camera according to an embodiment of the present invention.
The predetermined area T is determined by, for example, the predetermined area information (x, y, α).
The angle of view α is an angle indicating the angle of the virtual camera 3, as shown in Fig. 10D. When 2L is the diagonal angle of view of the predetermined area T represented by the angle of view α, the coordinates of the center point CP of the predetermined area T are represented by the (x, y) of the predetermined area information.
Here, the distance from the virtual camera 3 to the center point CP is represented by the following formula (1):
f = tan(α / 2)  (Formula 1)
The initial image is an image provided by determining the predetermined area T based on an initial setting set in advance, and generated based on the determined predetermined area T. The initial setting is, for example, (x, y, α) = (0, 0, 34) or the like.
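Formula (1) can be applied directly; the sketch below, with hypothetical names, computes the camera distance f for the initial setting (x, y, α) = (0, 0, 34) quoted above.

    import math

    def camera_distance(alpha_deg):
        # Formula (1): f = tan(alpha / 2), the distance from the virtual
        # camera 3 to the center point CP of the predetermined area T.
        return math.tan(math.radians(alpha_deg) / 2.0)

    initial_area = {"x": 0.0, "y": 0.0, "alpha": 34.0}
    f = camera_distance(initial_area["alpha"])    # about 0.306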
In step S0706, the smartphone 2 causes the user to perform an operation for switching to an image editing mode. Here, in a case where the user does not perform the operation for switching to the image editing mode, the smartphone 2 outputs, for example, the initial image.
In step S0707, the smartphone 2 performs a process of outputting an output image for editing the image.
Fig. 11 is a diagram illustrating one example of an output image in an initial state for performing editing of an image, according to an embodiment of the present invention.
The output image is, for example, an output image 21 in the initial state. The output image has an editing image 31 in the initial state and a changing image 41 in the initial state.
The output image displays buttons of a graphical user interface (GUI) for accepting the user's operations. The GUI includes, for example, a blur editing button 51, an erase editing button 52, and the like. Here, the output image may have other GUI elements.
The editing image 31 in the initial state is, for example, the initial image generated at step S0705.
The changing image 41 in the initial state is, for example, an image provided by reducing the full spherical panoramic image generated at step S0703.
The user edits the image in the image editing mode, and therefore applies operations to the editing image or the changing image displayed in the output image.
In step S0708, the smartphone 2 performs a process of causing the user to input an operation for editing the image.
In step S0709, the smartphone 2 acquires the coordinates at which the user input an operation on the input/output device 2H3. In step S0709, the smartphone 2 performs a process of determining, based on the acquired coordinates, whether the operation was performed on the area of the editing image 31 in the initial state in Fig. 11 or on the area of the changing image 41 in the initial state in Fig. 11.
Image editing is editing performed based on the user's operation. The editing of the area to be output is editing that changes the area to be output in the image based on the changing image, or editing performed on the predetermined area based on the editing image.
In a case where the operation is applied to the area of the changing image at step S0709, the editing for changing the area to be output is performed.
In a case where the operation is applied to the area of the editing image at step S0709, the editing performed on the predetermined area based on the editing image is performed.
In a case where the user operates the changing image (it is determined at step S0709 that the area is that of the changing image), the smartphone 2 proceeds to step S0710. In a case where the user operates the editing image (it is determined at step S0709 that the area is that of the editing image), the smartphone 2 proceeds to step S0712.
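The branch taken at step S0709 amounts to a hit test on the two sub-images; the sketch below assumes hypothetical rectangle layouts, since the patent does not fix the screen geometry.

    def dispatch_touch(x, y, editing_rect, changing_rect):
        # Step S0709: decide whether the touch coordinates fall in the area
        # of the editing image 31 or of the changing image 41.
        # Rectangles are hypothetical (left, top, right, bottom) tuples.
        def inside(rect):
            left, top, right, bottom = rect
            return left <= x <= right and top <= y <= bottom
        if inside(changing_rect):
            return "change_area"   # proceed to step S0710
        if inside(editing_rect):
            return "edit_area"     # proceed to step S0712
        return "none"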
<Editing for changing the area to be output>
Fig. 12A, Fig. 12B and Fig. 12C are diagrams for illustrating one example of editing an area to be output, according to an embodiment of the present invention.
Fig. 12A is a diagram illustrating one example of the output image after editing the area to be output, according to an embodiment of the present invention.
The output image is, for example, an output image 22 after editing the area to be output. The output image 22 after editing the area to be output has an editing image 32 after editing the area to be output and a changing image 42 after editing the area to be output.
The editing image 32 after editing the area to be output is an image generated by changing the predetermined area T shown in Fig. 10A, Fig. 10B, Fig. 10C and Fig. 10D in the editing image 31 in the initial state in Fig. 11.
The changing image 42 after editing the area to be output is an image generated by changing the predetermined area T shown in Fig. 10A, Fig. 10B, Fig. 10C and Fig. 10D in the changing image 41 in the initial state in Fig. 11.
Fig. 12B is a diagram illustrating one example of the predetermined area after editing the area to be output, according to an embodiment of the present invention.
For example, the output image 22 after editing the area to be output is provided from the viewpoint obtained when the virtual camera 3 in the state of Fig. 10B is rotated by panning, as shown in Fig. 12B.
Fig. 12C is a diagram illustrating one example of an operation in the case of editing the area to be output, according to an embodiment of the present invention.
The editing of the area to be output is performed in such a way that the user operates the screen area where the changing image is output.
The operation to be input at step S0708 is, for example, an operation of changing the area to be output in the left and right directions of the image, or the like.
In the case of Fig. 12A, Fig. 12B and Fig. 12C, the operation input by the user is an operation of tracing, with a finger, the changing image 41 in the initial state in Fig. 11 in the left and right directions of the screen as shown in Fig. 12C, that is, a so-called "swipe operation" or the like.
Here, the input amount of the swipe operation is denoted as (dx, dy).
The relation between the polar coordinate system (φ, θ) of the full sphere in Fig. 8A, Fig. 8B, Fig. 8C and Fig. 8D and the input amount (dx, dy) is represented by the following formula (2):
φ = k × dx
θ = k × dy  (Formula 2)
In the above formula (2), k is a predetermined constant for performing adjustment.
The output image is changed based on the input amount of the swipe operation, and therefore the user can operate the image with the sensation of rotating a sphere such as a terrestrial globe.
Here, in order to simplify the process, where on the screen the swipe operation is input need not be taken into account. That is, even if the swipe operation is performed at any position of the screen where the changing image 41 in the initial state is output, similar values may be input for the input amount (dx, dy) of formula (2).
The changing image 42 after editing the area to be output performs a perspective projection transformation of the coordinates (Px, Py, Pz) of the vertices P in three-dimensional space, based on the (φ, θ) calculated according to formula (2).
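One way to realize this transformation is sketched below: each vertex P is rotated by the pole angles obtained from formula (2) and then projected with a pinhole model. The rotation order and the reuse of f from formula (1) are assumptions, since the patent names the step without fixing the matrices.

    import numpy as np

    def project_vertex(p, phi, theta, f):
        # Pan rotation by phi about the y-axis, tilt rotation by theta about
        # the x-axis, then a simple perspective divide (assumes z > 0).
        cp, sp = np.cos(phi), np.sin(phi)
        ct, st = np.cos(theta), np.sin(theta)
        rot_pan = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
        rot_tilt = np.array([[1.0, 0.0, 0.0], [0.0, ct, -st], [0.0, st, ct]])
        x, y, z = rot_tilt @ rot_pan @ np.asarray(p, dtype=float)
        return f * x / z, f * y / z   # screen coordinates of vertex P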
In a case where the user performs a swipe operation with an input amount (dx2, dy2) in the case of Fig. 12A, the polar coordinate system (φ, θ) of the full sphere is represented by the following formula (3):
φ = k × (dx + dx2)
θ = k × (dy + dy2)  (Formula 3)
As shown in the above formula (3), the polar coordinate system (φ, θ) of the full sphere is calculated based on the total values of the input amounts of the individual swipe operations. Even in a case where a plurality of swipe operations are performed, the calculation of the polar coordinate system (φ, θ) of the full sphere is performed, and thereby a constant operability can be maintained.
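A sketch of the accumulation in formulas (2) and (3) follows; the value of k is an assumption, since the patent only calls it a predetermined constant for adjustment.

    class SwipeTracker:
        # Keeps the running totals of the swipe inputs so that the pole
        # angles follow formula (3) across any number of swipe operations.
        def __init__(self, k=0.2):
            self.k = k
            self.total_dx = 0.0
            self.total_dy = 0.0

        def on_swipe(self, dx, dy):
            self.total_dx += dx
            self.total_dy += dy
            phi = self.k * self.total_dx
            theta = self.k * self.total_dy
            return phi, theta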
Here, the editing of the area to be output is not limited to pan rotation. For example, tilt rotation of the virtual camera 3 in the up and down directions of the image may also be realized.
The operation input at step S0708 is, for example, an operation of enlarging or reducing the area to be output, or the like.
Fig. 13A and Fig. 13B are diagrams for illustrating one example of enlarging or reducing an area to be output, according to an embodiment of the present invention.
In a case where the area to be output is enlarged, the operation input by the user is an operation of spreading two fingers apart on the screen where the changing image 41 in the initial state in Fig. 11 is output, as shown in Fig. 13A, that is, a so-called "pinch-out operation" or the like.
In a case where the area to be output is reduced, the operation input by the user is an operation of moving two fingers toward each other on the screen where the changing image 41 in the initial state in Fig. 11 is output, as shown in Fig. 13B, that is, a so-called "pinch-in operation" or the like.
Here, as long as the position first touched by the user's finger is in the area where the changing image is displayed, the pinch-out or pinch-in operation is sufficient, and the operation may subsequently use the area where the editing image is displayed. Furthermore, the operation may be performed with a so-called "stylus pen", which is a tool for operating a touch panel or the like.
In a case where the operations shown in Fig. 13A and Fig. 13B are input, the smartphone 2 performs a so-called "zoom process".
The zoom process is a process of generating an image with a predetermined area that is enlarged or reduced based on the operation input by the user.
In a case where the operations shown in Fig. 13A and Fig. 13B are input, the smartphone 2 acquires an amount of change dz based on the operation input by the user.
The zoom process is a process of performing a calculation according to the following formula (4) based on the amount of change dz:
α = α0 + m × dz  (Formula 4)
α indicated in the above formula (4) is the angle of view α of the virtual camera 3 shown in Fig. 10A, Fig. 10B, Fig. 10C and Fig. 10D. m indicated in formula (4) is a coefficient for adjusting the zoom amount. α0 indicated in formula (4) is the angle of view α in the initial state, that is, the angle of view α in the case where the initial image is generated at step S0705.
In a case where the operations shown in Fig. 13A and Fig. 13B are input, the smartphone 2 determines the range of the predetermined area T in Fig. 10A, Fig. 10B, Fig. 10C and Fig. 10D by using, in a projection matrix, the angle of view α calculated according to formula (4).
In a case where the calculation is performed according to formula (4) and the user then performs an operation providing an amount of change dz2, the smartphone 2 performs a calculation according to the following formula (5):
α = α0 + m × (dz + dz2)  (Formula 5)
As shown in the above formula (5), the angle of view α is calculated based on the total value of the amounts of change of the operations shown in Fig. 13A and Fig. 13B. Even in a case where a plurality of the operations shown in Fig. 13A and Fig. 13B are performed, the calculation of the angle of view α is performed, and thereby a constant operability can be maintained.
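The same accumulation pattern applies to formulas (4) and (5); in the sketch below, alpha0 = 34 matches the initial setting used for the initial image, and m is an assumed coefficient.

    class ZoomTracker:
        # Angle of view per formula (5): alpha = alpha0 + m * (total of dz).
        def __init__(self, alpha0=34.0, m=0.1):
            self.alpha0 = alpha0
            self.m = m
            self.total_dz = 0.0

        def on_zoom(self, dz):
            self.total_dz += dz
            return self.alpha0 + self.m * self.total_dz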
Here, the zoom process is not limited to the processes according to formula (4) and formula (5).
The zoom process may be realized by combining a change of the angle of view α of the virtual camera 3 with a change of the position of the viewpoint.
Fig. 14 is a diagram for illustrating one example of another zoom process according to an embodiment of the present invention.
Fig. 14 is a model diagram for illustrating the other zoom process. The sphere CS in Fig. 14 is similar to the sphere CS in Fig. 10A, Fig. 10B, Fig. 10C and Fig. 10D. In Fig. 14, the radius of the sphere CS is described as "1".
The origin in Fig. 14 is provided at the initial position of the virtual camera 3. The position of the virtual camera 3 is changed on the optical axis (that is, the z-axis in Fig. 10A). The amount of movement d of the virtual camera 3 can be represented by the distance from the origin. For example, in a case where the virtual camera 3 is located at the origin (that is, the case of the initial state), the amount of movement d is "0".
The range of the predetermined area T in Fig. 10A, Fig. 10B, Fig. 10C and Fig. 10D is represented by the angle of view ω, based on the amount of movement d and the angle of view α of the virtual camera 3. The angle of view ω shown in Fig. 14 is the angle of view in the case where the virtual camera 3 is located at the origin (that is, the case of d = 0).
In a case where the virtual camera 3 is located at the origin (that is, the case of d = 0), the angle of view ω is identical to the angle of view α. In a case where the virtual camera 3 moves away from the origin (that is, a case where the value of d increases), the angle of view ω and the angle of view α represent different ranges.
The other zoom process is a process of changing the angle of view ω.
Fig. 15 is a table for illustrating one example of another zoom process according to an embodiment of the present invention.
Illustrative Table 4 illustrates an example of a case where the angle of view ω is in the range of 60° to 300°.
As shown in illustrative Table 4, the smartphone 2 determines, based on a zoom designation value ZP, which of the angle of view α and the amount of movement d of the virtual camera 3 is changed preferentially.
"Range" is a range determined based on the zoom designation value ZP.
"Output magnification" is the output magnification of the image calculated based on the image parameters determined by the other zoom process.
"Zoom designation value ZP" is a value corresponding to the angle of view to be output. The other zoom process changes the process of determining the amount of movement d and the angle of view α based on the zoom designation value ZP. For the process to be performed in the other zoom process, one of four methods is determined based on the zoom designation value ZP, as shown in illustrative Table 4. The range of the zoom designation value ZP is divided into four ranges: the range of A-B, the range of B-C, the range of C-D, and the range of D-E.
"Angle of view ω" is the angle of view ω corresponding to the image parameters determined by the other zoom process.
"Changed parameter" is a description illustrating the parameter that is changed based on the zoom designation value ZP in each of the four methods. "Remark" is a note on the "changed parameter".
"viewWH" in illustrative Table 4 is a value representing the width or the height of an output area. In a case where the output area is horizontally long, "viewWH" is the value of the width. In a case where the output area is vertically long, "viewWH" is the value of the height. That is, "viewWH" is a value representing the size of the output area in its longer direction.
"imgWH" in illustrative Table 4 is a value representing the width or the height of an output image. In a case where the output area is horizontally long, "imgWH" is the value of the width of the output image. In a case where the output area is vertically long, "imgWH" is the value of the height of the output image. That is, "imgWH" is a value representing the size of the output image in its longer direction.
"imageDeg" in illustrative Table 4 is a value representing the angle of the display range of the output image. In the case of representing the width of the output image, "imageDeg" is 360°. In the case of representing the height of the output image, "imageDeg" is 180°.
Figures 16A, 16B, 16C, 16D, and 16E are diagrams for illustrating an example of the "range" of the other zoom process according to an embodiment of the present invention.
The case of so-called "zooming out" is described below as an example with reference to Figures 16A through 16E. The left side of each of Figures 16A through 16E shows an example of the image to be output. The right side of each of Figures 16A through 16E is a diagram showing an example of the state of the virtual camera 3 at the time of output, in the model diagram shown in Figure 14.
Figure 16A is an example of the output when a zoom specification value ZP is input such that the "range" in illustrative table 4 of Figure 15 is "A-B". In the case of "A-B", the angle of view α of the virtual camera 3 is fixed at, for example, α=60°. In the case of "A-B", the amount of movement d of the virtual camera 3 changes while the angle of view α remains fixed, as shown in Figure 16A. When the amount of movement d of the virtual camera 3 increases while the angle of view α is fixed, the angle of view ω widens. In the case of "A-B", zooming out is thus achieved by fixing the angle of view α and increasing the amount of movement d of the virtual camera 3. Here, in the case of "A-B", the amount of movement d of the virtual camera 3 ranges from 0 to the radius of the sphere CS. That is, in the cases of Figures 16A through 16E the radius of the sphere CS is "1", and therefore the amount of movement d of the virtual camera 3 takes a value in the range 0-1. The amount of movement d of the virtual camera 3 is the value corresponding to the zoom specification value ZP.
Figure 16 B is to input zoom specification in such mode that " scope " in the illustrative table 4 in Figure 15 is " B-C "
One example of the output in the case of value ZP." B-C " be zoom specification value ZP be the situation of the value bigger than " A-B ".At " B-
C " in the case of, the amount of movement d of virtual camera 3 is fixed on the value of the edge for virtual camera 3 is positioned at sphere CS.Also
That is, as shown in fig 16b, the amount of movement d of virtual camera 3 is fixed on " 1 " place of the radius as sphere CS.At " B-C "
In the case of, when the amount of movement d of virtual camera 3 is fixing condition, visual angle α changes.It is solid at the amount of movement d of virtual camera 3
In the case of during fixed condition, visual angle α increases, visual angle ω increases to Figure 16 B from Figure 16 A.In the case of " B-C ", virtual camera
The amount of movement d of 3 is fixing, and visual angle α increases, such that it is able to realize reducing process.In the case of " B-C ", visual angle α counts
Calculate as ω/2.In the case of " B-C ", the scope of visual angle α be from as 60 ° of value fixing in the case of " A-B " to
120°。
In the case of " A-B " or " B-C ", visual angle ω is identical with zoom specification value ZP.In " A-B " or the situation of " B-C "
Under, the value of visual angle ω increases.
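As a minimal sketch only, the behavior of the "A-B" and "B-C" ranges can be written as follows. The function name is hypothetical, the range boundaries (ZP = 60°, 120°, 240°) are inferred from the statements that α runs from 60° to 120° with α = ω/2 in "B-C", and the linear ZP-to-d correspondence in "A-B" is an assumption, since the description only states that d "corresponds to" ZP.

```python
def zoom_state_ab_bc(zp):
    """Sketch of the 'A-B' and 'B-C' ranges of the other zoom process.

    zp is the zoom specification value ZP in degrees. In both ranges
    the resulting angle of view omega coincides with ZP. Returns
    (alpha, d, omega) for the virtual camera 3.
    """
    if 60.0 <= zp <= 120.0:
        # 'A-B': alpha is fixed at 60 degrees while the amount of
        # movement d changes from 0 to the sphere radius '1'. A linear
        # ZP-to-d correspondence is assumed here for illustration.
        alpha = 60.0
        d = (zp - 60.0) / 60.0
    elif 120.0 < zp <= 240.0:
        # 'B-C': d is fixed on the edge of the sphere CS (d = 1) and
        # alpha is calculated as omega / 2 (60 to 120 degrees).
        alpha = zp / 2.0
        d = 1.0
    else:
        raise ValueError("ZP outside the 'A-B'/'B-C' ranges")
    return alpha, d, zp
```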
Figure 16 C is to input zoom specification in such mode that " scope " in the illustrative table 4 in Figure 15 is " C-D "
One example of the output in the case of value ZP." C-D " be zoom specification value ZP be the situation of the value bigger than " B-C ".At " C-
D " in the case of, visual angle α is fixed at such as α=120 °.In the case of " C-D ", as shown in figure 16 c, it is fixing at visual angle α
Condition time, the amount of movement d of virtual camera 3 changes.When visual angle α is fixing condition, the amount of movement d of virtual camera 3 increases
In the case of, visual angle ω increases.Virtual camera is calculated according to formula based on zoom specification value ZP shown in the illustrative table 4 in Figure 15
The amount of movement d of 3.In the case of " C-D ", the amount of movement d of virtual camera 3 changes into maximum display distance dmax1.
Maximum display distance dmax1 is to show sphere CS so that distance maximum in the output area of smart phone 2.Defeated
Going out region is the size etc. that such as smart phone 2 exports the screen of image etc..Maximum display distance dmax1 is such as Figure 16 D
Situation.Maximum display distance dmax1 is calculated according to following formula (6).
" viewW " in above-mentioned formula (6) is the value of the width of the output area representing smart phone 2.Above-mentioned formula (6)
In " viewH " be the value of height of the output area representing smart phone 2.Similar content explained below.
The value of based on the output area as smart phone 2 " viewW " and " viewH " calculates maximum display distance
dmax1。
Figure 16 D is to input zoom specification in such mode that " scope " in the illustrative table 4 in Figure 15 is " D-E "
One example of the output in the case of value ZP." D-E " be zoom specification value ZP be the situation of the value bigger than " C-D ".At " D-
E " in the case of, visual angle α is fixed at such as α=120 °.In the case of " D-E ", as seen in fig. 16d, it is fixing at visual angle α
Condition time, the amount of movement d of virtual camera 3 changes.The amount of movement d of virtual camera 3 changes into restriction display distance dmax2.Limit
Display distance dmax2 processed be show sphere CS in case in be connected on the distance in the output area of smart phone 2.At following formula
(7) calculate in and limit display distance dmax2.
Limit the situation that display distance dmax2 is such as Figure 16 E.
The value of based on the output area as smart phone 2 " viewW " and " viewH " calculates and limits display distance
dmax2.Limit maximum magnitude that display distance dmax2 represents that smart phone 2 can export (i.e. the amount of movement d's of virtual camera 3
Limits value).Embodiment can be limited: the scope shown in illustrative table 4 in Figure 15 includes that zoom is advised by such mode
Lattice value ZP, say, that the value of the amount of movement d of virtual camera 3 is less than or equal to limiting display distance dmax2.Owing to this
Limiting, the condition being adapted as the screen of output area at output image or the image with predetermined output amplification export use
During the condition at family, it is provided that smart phone 2, such that it is able to realize reducing.
Owing to the process for " D-E ", smart phone 2 is so that it is that whole day ball is complete that user identifies output image
Scape.
Here, in the cases of "C-D" and "D-E", the angle of view ω differs from the zoom specification value ZP. Furthermore, as shown in illustrative table 4 of Figure 15 and in Figures 16A through 16E, the angle of view ω is continuous across the ranges, but it does not increase uniformly as the zoom moves toward the wide-angle side. That is, in the case of "C-D", where the amount of movement d of the virtual camera 3 changes, the angle of view ω increases with that amount of movement d. In the case of "D-E", where the amount of movement d of the virtual camera 3 changes, the angle of view ω decreases with that amount of movement d. The decrease of the angle of view ω in "D-E" is caused by the region outside the sphere CS being reflected in the image. When the zoom specification value ZP specifies a wide field of view of 240° or more, the smart phone 2 changes the amount of movement d of the virtual camera 3, and can thereby output an image with less strangeness to the user while changing the angle of view ω.
When the zoom specification value ZP changes toward the wide-angle direction, the angle of view ω usually widens. When the angle of view ω widens, the smart phone 2 fixes the angle of view α of the virtual camera 3 and increases the amount of movement d of the virtual camera 3. By fixing the angle of view α of the virtual camera 3, the smart phone 2 can reduce the increase of that angle of view α. By reducing the increase of the angle of view α, the smart phone 2 can output an image with less distortion to the user. When the angle of view α of the virtual camera 3 is fixed, the smart phone 2 increases the amount of movement d of the virtual camera 3, that is, moves the virtual camera 3 away, and can thereby give the user the open feeling of a wide-angle display. Furthermore, the movement of moving the virtual camera 3 away is similar to the movement by which a human confirms a wide range, and therefore the smart phone 2 achieves zooming out with less strangeness owing to the movement that moves the virtual camera away.
In the case of " D-E ", visual angle ω reduces with changing zoom specification value ZP towards wide-angle direction.At " D-E "
In the case of, smart phone 2 reduces visual angle ω, and thus, it is possible to provides the sensation away from sphere CS to user.Smart phone 2
There is provided the sensation away from sphere CS to user, and thus, it is possible to the image with less strange sense is exported user.
Therefore, processing owing to another zoom shown in the illustrative table 4 in Figure 15, smart phone 2 can will have relatively
The image of few strange sense exports user.
Here, the amount of movement d or visual angle α of the virtual camera 3 shown in illustrative table 4 that embodiment is not limited in only Figure 15 change
Situation about becoming.Embodiment is sufficient that become and preferentially changes void when condition shown in illustrative table 4 in fig .15
Intend amount of movement d or the pattern of visual angle α of camera 3, and fixed value can change into sufficiently small value, such as, for adjustment.
Additionally, embodiment is not limited to reduce.Embodiment can realize such as amplifying.
Here, the case where the area to be output is edited is not limited to the case where an operation is performed on the change image. For example, the smart phone 2 may edit the area to be output when an operation is performed on the editing image.
<Editing performed on the predetermined area based on the editing image>
The editing performed on the predetermined area based on the editing image is blur editing, which blurs specified pixels. Here, as other kinds of editing, erasure of a specified image range, a change of the tone or the color depth of the image, a color change of a specified image range, and the like may be provided.
The case where the user performs blur editing on the output image 22 after the area to be output has been edited in Figures 12A through 12C is described below as an example.
When the user performs the operation of pressing the blur edit button 51, the smart phone 2 has the user input a so-called "tap operation" on the area of the editing image 32 of the output image 22 displayed after the area to be output has been edited in Figures 12A through 12C.
The smart phone 2 performs the process of blurring a predetermined range centered on the point tapped by the user.
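A minimal sketch of this blur process, using Pillow; the circular shape of the range, its radius, and the Gaussian radius are hypothetical values rather than parameters of the embodiment.

```python
from PIL import Image, ImageDraw, ImageFilter

def blur_around_tap(image, tap_x, tap_y, range_radius=40, blur_radius=8):
    """Blur a predetermined range centered on the tapped point.

    image is a PIL Image; tap_x, tap_y are the tapped pixel
    coordinates. Returns a new image in which only the circular
    range around the tap is blurred.
    """
    blurred = image.filter(ImageFilter.GaussianBlur(blur_radius))
    # Mask that is opaque only inside the range around the tap point.
    mask = Image.new("L", image.size, 0)
    draw = ImageDraw.Draw(mask)
    draw.ellipse((tap_x - range_radius, tap_y - range_radius,
                  tap_x + range_radius, tap_y + range_radius), fill=255)
    # Composite: blurred pixels inside the range, original outside.
    return Image.composite(blurred, image, mask)
```

Image.composite keeps the original pixels outside the mask, which matches the behavior described above; averaging of peripheral pixels or band-pass filtering could replace the Gaussian filter.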
Figures 17A and 17B are diagrams for illustrating an example of the editing performed on the predetermined area based on the editing image according to an embodiment of the present invention.
Figure 17A is a diagram for illustrating an example of blur editing according to an embodiment of the present invention. Figure 17A shows the output image 23 after blur editing. The output image 23 after blur editing has the editing image 33 after blur editing and the change image 43 after blur editing.
The editing image 33 after blur editing is produced by applying blur editing to the output image after the area to be output has been edited in Figures 12A through 12C. The blur editing is realized by, for example, a Gaussian function, averaging of peripheral pixels, band-pass filtering, or the like. The blur editing is shown as, for example, the blur editing area 5.
The blur editing is also applied to the change image. The smart phone 2 calculates the point (Px, Py, Pz) in a three-dimensional space from the coordinates of the point tapped by the user. The smart phone 2 calculates (Px, Py, Pz) from the two-dimensional coordinates by using the inverse transformation of the perspective projection transformation of the viewing frustum. The two-dimensional coordinates carry no depth information; therefore, (Px, Py, Pz) is calculated by using a point on the sphere and simultaneous equations. The sign of Pz in the projection coordinate system is constant, and therefore the smart phone 2 can solve the simultaneous equations. The coordinates of the full spherical panoramic image correspond to (Px, Py, Pz), and therefore the smart phone 2 can calculate the coordinates on the full spherical panoramic image from the calculated (Px, Py, Pz). Therefore, when the condition of the blur editing is reflected as shown in Figure 17A, the change image 43 after blur editing is provided.
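The tap-to-sphere calculation can be sketched as below, under simplifying assumptions that are not taken from the embodiment: the virtual camera 3 sits at (0, 0, d) looking down the optical axis, the frustum is symmetric, and the nearest intersection in front of the camera resolves the missing depth.

```python
import math

def tap_to_sphere(u, v, view_w, view_h, d, alpha_deg):
    """Map a tapped 2D point to (Px, Py, Pz) on the unit sphere CS.

    u, v are pixel coordinates of the tap; view_w, view_h are the
    output area size; d is the amount of movement of the virtual
    camera 3 and alpha_deg its angle of view.
    """
    # Normalized device coordinates in [-1, 1].
    nx = 2.0 * u / view_w - 1.0
    ny = 1.0 - 2.0 * v / view_h
    # Ray through the tapped pixel; 2D coordinates carry no depth.
    t = math.tan(math.radians(alpha_deg) / 2.0)
    aspect = view_w / view_h
    dx, dy, dz = nx * t * aspect, ny * t, -1.0
    # Intersect origin + s * direction with the unit sphere |p| = 1:
    # the 'simultaneous equations' step. The camera sits at (0, 0, d).
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * d * dz
    c = d * d - 1.0
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                        # the tap misses the sphere
    s1 = (-b - math.sqrt(disc)) / (2.0 * a)
    s2 = (-b + math.sqrt(disc)) / (2.0 * a)
    s = s1 if s1 > 0.0 else s2             # nearest point in front
    if s <= 0.0:
        return None
    px, py, pz = s * dx, s * dy, d + s * dz
    # (Px, Py, Pz) then maps to equirectangular coordinates on the
    # full spherical panoramic image via longitude and latitude.
    lon = math.atan2(px, pz)
    lat = math.asin(max(-1.0, min(1.0, py)))
    return (px, py, pz), (lon, lat)
```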
Figure 17B is a diagram for illustrating an example of canceling blur editing according to an embodiment of the present invention.
The editing applied to the predetermined area based on the editing image may be editing for canceling the blur of the blur editing area 5 blurred by the blur editing.
When the user performs the operation of pressing the cancel edit button 52, the smart phone 2 outputs the output image 24 for cancel editing, in which a filled area 6 is displayed on the blur editing area 5 where the blur editing has been applied. As shown in Figure 17B, the output image 24 for cancel editing is an image in which the filled area 6 is displayed on the blur editing area 5 of the editing image 33 after blur editing in Figure 17A. The user performs a tap operation on the displayed filled area 6 (i.e. the area where blurring has been applied). The smart phone 2 performs the process of canceling the blur editing in a predetermined range centered on the point tapped by the user. That is, in the editing for canceling blur editing, the smart phone 2 returns the predetermined range centered on the point tapped by the user, in the editing image 33 after blur editing, to the state of the output image 22 after the area to be output has been edited in Figures 12A through 12C.
Once a captured image of a person's face, or of a building where photography is prohibited, is provided or shared on the Internet, trouble may arise. In particular, when a panoramic image covering a wide range is captured, images of many objects within that wide range are frequently captured. Therefore, by blurring potentially problematic objects at the time of providing or sharing, the user can reduce trouble. Owing to the editing applied to the predetermined area based on the editing image, the smart phone 2 can facilitate the operation of blurring the face of a person captured in the image. Therefore, owing to the editing applied to the predetermined area based on the editing image, the smart phone 2 enables the user to perform image operations easily.
Here, when the area to be output is edited, the smart phone 2 may change the range of the editing applied to the predetermined area according to the magnification based on the editing image, or the like.
In step S0710, the smart phone 2 calculates the amount of movement of the coordinates to be output. That is, in step S0710, the smart phone 2 calculates the position of the predetermined area T in Figures 10A through 10D corresponding to the swipe operation of the user, based on, for example, formula (2) above.
In step S0711, the smart phone 2 updates the position of the predetermined area T in Figures 10A through 10D to the position calculated in step S0710.
In step S0712, the smart phone 2 calculates the coordinates of the point to be edited. That is, in step S0712, the smart phone 2 calculates the coordinates corresponding to the tap operation of the user, and performs the calculation for projection onto the three-dimensional coordinates.
In step S0713, the smart phone 2 calculates the predetermined area edited based on the editing image, centered on the coordinates calculated in step S0712. That is, in step S0713, the smart phone 2 calculates the pixels of the point specified by the tap operation of the user, or of the periphery of that point, as the target of the blur editing or the like.
In step S0714, the smart phone 2 produces the editing image. When the user performs an operation on the change image, in step S0714 the smart phone 2 produces the editing image based on the predetermined area T updated in step S0711. When the user performs an operation on the editing image, in step S0714 the smart phone 2 produces the editing image in which the blur process is reflected on the pixels calculated in step S0713.
In step S0715, the smart phone 2 produces the change image. When the user performs an operation on the change image, in step S0715 the smart phone 2 produces the change image based on the predetermined area T updated in step S0711. When the user performs an operation on the editing image, in step S0715 the smart phone 2 produces the change image indicating the position of the blur target as in step S0713.
The smart phone 2 repeats the processes of steps S0708 through S0715.
<Process on the smart phone>
Figure 18 is a flowchart illustrating an example of the overall process on the smart phone according to an embodiment of the present invention.
In step S1801, the smart phone 2 performs the process of acquiring an image or the like from the image capturing device 1 in Figure 1. The process of step S1801 corresponds to the process of step S0702 in Figure 7.
In step S1802, the smart phone 2 performs the process of producing a panoramic image. The process of step S1802 is performed based on the image acquired in step S1801. The process of step S1802 corresponds to the process of step S0703 in Figure 7.
In step S1803, the smart phone 2 performs the process of having the user select the image to be output. The process of step S1803 corresponds to the process of step S0704 in Figure 7. Specifically, the process of having the user select the image to be output is a process of outputting a thumbnail image, or of providing a UI or the like for the user to perform an operation on the thumbnail image.
In step S1804, the smart phone 2 performs the process of producing the initial image. The process of step S1804 corresponds to the process of step S0705 in Figure 7. In step S1804, the smart phone 2 produces and outputs the image selected by the user in step S1803 as the initial image.
In step S1805, the smart phone 2 determines whether to switch to the mode for editing the image. The determination in step S1805 is performed based on whether there is a user operation of step S0706 in Figure 7. When it is determined in step S1805 that the mode is switched to the mode for editing the image (YES in step S1805), the smart phone 2 proceeds to step S1806. When it is determined in step S1805 that the mode is not switched to the mode for editing the image (NO in step S1805), the smart phone 2 returns to step S1804.
The case where it is determined in step S1805 that the mode is switched to the mode for editing the image is the case where the user provides an input to start editing the image. The case where it is determined that the mode is not switched to the mode for editing the image is the case where the user performs no operation in step S1805. Therefore, when the user performs no operation, the smart phone 2 continues to output the initial image and waits for the user's input to start editing the image.
In step S1806, the smart phone 2 performs the process of outputting the output image for editing the image. The process of step S1806 corresponds to the process of step S0707 in Figure 7. Furthermore, the smart phone 2 outputs the output image and thereby accepts the user's operation of step S0708 in Figure 7.
In step S1807, the smart phone 2 determines whether the user's operation is performed on the editing image or on the change image. The process of step S1807 corresponds to the process of step S0709 in Figure 7. The smart phone 2 determines on which of the editing image and the change image the user's operation of step S0708 in Figure 7 is performed.
When it is determined that the user's operation is performed on the change image (CHANGE IMAGE in step S1807), the smart phone 2 proceeds to step S1808. When it is determined that the user's operation is performed on the editing image (EDITING IMAGE in step S1807), the smart phone 2 proceeds to step S1810.
In step S1808, the smart phone 2 performs the process of calculating the amount of movement of the predetermined area caused by the operation. The process of step S1808 corresponds to the process of step S0710 in Figure 7. In step S1808, the smart phone 2 calculates the amount of movement for moving the predetermined area based on the swipe operation performed by the user, and changes the predetermined area.
In step S1809, the smart phone 2 performs the process of updating the predetermined area. The process of step S1809 corresponds to the process of step S0711 in Figure 7. In step S1809, the smart phone 2 moves the predetermined area T in Figures 10A through 10D to the position corresponding to the amount of movement calculated in step S1808, and updates the predetermined area T from the position of the initial image to the position corresponding to the user's swipe operation.
In step S1810, the smart phone 2 performs the process of calculating and three-dimensionally projecting the coordinates targeted by the operation. The process of step S1810 corresponds to the process of step S0712 in Figure 7. In step S1810, the smart phone 2 calculates the coordinates on the full spherical image corresponding to the point specified by the user's tap operation.
In step S1811, the smart phone 2 performs the process of calculating the pixels to be blurred. For example, the smart phone 2 has an editing state table that holds, for each pixel, flag data indicating whether the pixel is a blur target. The editing state table represents whether each pixel is output with the blur effect. The smart phone 2 refers to the editing state table, determines whether each pixel in the output image is output with the blur effect, and outputs the image. That is, the process of step S1811 is a process of updating the editing state table. When the user's tap operation provides an operation for blurring as shown in Figure 17A or for canceling as shown in Figure 17B, the smart phone 2 updates the editing state table based on that operation.
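A sketch of the editing state table and its update, assuming a boolean per-pixel table and a circular range around the tapped pixel; the table layout and the radius are illustrative assumptions.

```python
import numpy as np

def update_editing_state(table, cx, cy, radius, blur):
    """Update the per-pixel editing state table in place.

    table is a boolean (H, W) numpy array: True means the pixel is
    output with the blur effect. blur=True marks the range around the
    tapped pixel (Figure 17A); blur=False cancels it (Figure 17B).
    """
    h, w = table.shape
    ys, xs = np.ogrid[:h, :w]
    inside = (xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2
    table[inside] = blur
```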
In step S1812, the smart phone 2 performs the process of producing the editing image. The process of step S1812 corresponds to the process of step S0714 in Figure 7.
In step S1813, the smart phone 2 performs the process of producing the change image. The process of step S1813 corresponds to the process of step S0715 in Figure 7.
Owing to the processes of steps S1812 and S1813, the smart phone 2 produces the output image and performs the output to the user.
The smart phone 2 returns to step S1807 and repeats the processes illustrated previously.
When the editing state table indicates a blur target in the processes of steps S1812 and S1813, the smart phone 2 performs the blur process shown in, for example, Figure 17A, and outputs the result.
The smart phone 2 performs output at 30 or more frames per second, so that the user perceives the moving image as reproduced smoothly. It is desirable for the smart phone 2 to perform output at 60 or more frames per second, so that the user perceives a particularly smooth reproduction. Here, the frame rate of the output may be such that 60 frames per second is changed to, for example, 59.94 frames per second.
Here, the processes of steps S1812 and S1813 are not limited to processes that cause the smart phone 2 to perform the blur process and output the result.
For example, the smart phone 2 holds in advance an image produced by applying the blur process to all the pixels of the image to be output, and an image produced by applying no blur process. The smart phone 2 outputs each pixel by selecting, based on the editing state table, either the image produced by performing the blur process or the image produced by performing no blur process. By performing the blur process in advance, the smart phone 2 can reduce the amount of calculation for outputting the image. That is, by selecting and outputting each pixel in this way, the smart phone 2 can realize high-speed image output of, for example, 60 frames per second.
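The per-pixel selection can be sketched with numpy as follows; the array shapes are assumptions, and no blur is computed per frame, which is what makes the high frame rate plausible.

```python
import numpy as np

def select_output_pixels(original, blurred, editing_state):
    """Select every output pixel from the pre-blurred or original image.

    original and blurred are (H, W, 3) arrays prepared in advance;
    editing_state is the boolean (H, W) editing state table. Because
    no blur is computed here, the per-frame cost stays low enough for
    high-speed output of, for example, 60 frames per second.
    """
    return np.where(editing_state[..., None], blurred, original)
```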
Furthermore, for example, when each pixel is selected and output in this way, the smart phone 2 may store the output image. When the user performs no editing operation, the smart phone 2 outputs the stored image. Owing to the storing, the process of selecting and producing each pixel of the image to be output becomes unnecessary, and therefore the smart phone 2 can reduce the amount of calculation. Therefore, by storing the output image, the smart phone 2 can realize high-speed image output of, for example, 60 frames per second.
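The storing of the output image amounts to a small cache that is invalidated by editing operations; a minimal sketch, with hypothetical names:

```python
class OutputImageCache:
    """Minimal sketch of storing the output image between edits."""

    def __init__(self):
        self._stored = None

    def get(self, produce_output_image):
        # produce_output_image is a zero-argument callable selecting
        # every pixel; it runs only when no stored image is available.
        if self._stored is None:
            self._stored = produce_output_image()
        return self._stored

    def invalidate(self):
        # Called whenever the user performs an editing operation.
        self._stored = None
```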
Here, the output image is not limited to the image or the like shown in Figure 11. For example, the shape, position, size, or range of the editing image or the change image may be changed.
Figures 19A and 19B are diagrams for illustrating an example of changing the output (for example, the position, the orientation, or the like of the change image) according to an embodiment of the present invention.
The information processing device that displays the output image is, for example, the smart phone 2. The smart phone 2 is described below as an example.
Figure 19A is a diagram illustrating an example of changing the attitude of the smart phone 2 according to an embodiment of the present invention.
For example, the change image is output at the pre-change position 7, as shown in Figure 19A. The case of Figure 19A is described below as an example.
For example, the user changes the attitude of the smart phone 2 in the rotation direction shown in Figure 19A. The state sensor 2H4 in Figure 6 detects the attitude of the smart phone 2. The smart phone 2 rotates and outputs the output image based on the attitude of the smart phone 2 given as the detection result. The smart phone 2 may change the position or the orientation of the area for outputting the change image based on the detection result.
Figure 19B is a diagram illustrating an example of changing the position or the orientation of the area displaying the change image based on the detection result according to an embodiment of the present invention.
When the user rotates the smart phone 2 as shown in Figure 19A, the smart phone 2 changes the position of the area for outputting the change image from the pre-change position 7 in Figure 19A to the first post-change position 71 or the second post-change position 72.
Here, the change image may be output while its orientation is changed based on the detection result as shown in Figure 19B (that is, rotated from the state of Figure 19A to the state shown in Figure 19B).
The smart phone 2 changes the position or the orientation of the area for outputting the change image based on the detection result. That is, even if the attitude of the smart phone 2 changes, the smart phone 2 can output the image at a position or in an orientation convenient for the user, because the image is output according to that attitude.
Furthermore, regarding the change of the position or the orientation of the change image, the smart phone 2 may display the change image in such a manner that it is restored in the course of the change.
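A sketch of choosing the output position of the change image from the detected attitude; the angle thresholds and the screen coordinates standing in for the pre-change position 7 and the post-change positions 71 and 72 are hypothetical.

```python
CHANGE_IMAGE_SIZE = 120   # hypothetical size of the change image, px
MARGIN = 16               # hypothetical screen margin, px

def change_image_position(rotation_deg, view_w, view_h):
    """Pick the output position of the change image from the attitude.

    rotation_deg is the device rotation reported by the state sensor
    2H4 (0 for the attitude of Figure 19A). Returns the top-left
    corner of the area for outputting the change image.
    """
    if -45.0 <= rotation_deg <= 45.0:     # attitude of Figure 19A
        return MARGIN, view_h - CHANGE_IMAGE_SIZE - MARGIN   # position 7
    if rotation_deg > 45.0:
        return MARGIN, MARGIN                                # position 71
    return view_w - CHANGE_IMAGE_SIZE - MARGIN, MARGIN       # position 72
```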
<Functional configuration>
Figure 20 is a block diagram illustrating an example of the functional configuration of the image capturing system according to an embodiment of the present invention.
The image capturing system 10 has the image capturing device 1 and the smart phone 2. The image capturing system 10 has a first image capturing part 1F1, a second image capturing part 1F2, and a full spherical image producing part 1F3. The image capturing system 10 has an image acquisition part 2F1, a producing part 2F2, an input/output part 2F3, a detection part 2F4, a storage part 2F5, and a control part 2F6.
The first image capturing part 1F1 and the second image capturing part 1F2 capture and produce the images that are the material of the full spherical image. The first image capturing part 1F1 is realized by, for example, the front-side image capturing element 1H1 in Figures 2A through 2C, and the like. The second image capturing part 1F2 is realized by, for example, the rear-side image capturing element 1H2 in Figures 2A through 2C, and the like. The images that are the material of the full spherical image are, for example, the hemispherical images shown in Figures 4A and 4B.
The full spherical image producing part 1F3 produces the image output to the smart phone 2 (for example, the full spherical image). The full spherical image producing part 1F3 is realized by, for example, the image processing unit 1H7 in Figure 5, and the like. The full spherical image producing part 1F3 produces the full spherical image from the hemispherical images captured by the first image capturing part 1F1 and the second image capturing part 1F2.
The image acquisition part 2F1 acquires the image data (for example, the full spherical image) from the image capturing device 1. The image acquisition part 2F1 is realized by, for example, the network I/F 2H6 in Figure 6, and the like. The image acquisition part 2F1 performs the process for the smart phone 2 to acquire the image data (for example, the full spherical image).
The producing part 2F2 performs the process of producing each kind of image and every calculation necessary for producing the images. The producing part 2F2 has a change image producing part 2F21 and an editing image producing part 2F22. The producing part 2F2 is realized by, for example, the CPU 2H5 in Figure 6, and the like.
The change image producing part 2F21 performs the process of producing the change image. The change image producing part 2F21 acquires, for example, the image data and the editing state table from the storage part 2F5. The change image producing part 2F21 produces the change image based on the acquired editing state table and image data.
The editing image producing part 2F22 performs the process of producing the editing image. The editing image producing part 2F22 acquires, for example, the image data and the editing state table from the storage part 2F5. The editing image producing part 2F22 produces the editing image based on the acquired editing state table and image data.
When the user performs a tap or swipe operation, the producing part 2F2 calculates the coordinates associated with the operation, and the coordinates are stored in the editing state table. Furthermore, the images produced by the producing part 2F2 may be stored in the storage part 2F5 and retrieved according to the process.
The producing part 2F2 may produce each kind of image based on the detection result acquired from the detection part 2F4.
The input/output part 2F3 performs the process of inputting the user's operation. The input/output part 2F3 performs the process of outputting to the user the images produced by the producing part 2F2. The input/output part 2F3 is realized by, for example, the input/output device 2H3 in Figure 6, and the like.
The detection part 2F4 performs the process of detecting the attitude of the smart phone 2. The detection part 2F4 is realized by, for example, the state sensor 2H4 in Figure 6, and the like.
The storage part 2F5 stores each kind of information acquired or produced by the smart phone 2. The storage part 2F5 has, for example, an editing state table storage part 2F51 and an image storage part 2F52. The storage part 2F5 is realized by, for example, the auxiliary storage device 2H1 or the main storage device 2H2 in Figure 6, and the like.
The editing state table storage part 2F51 stores the data of the table representing the pixels on which the blur process is performed.
The image storage part 2F52 stores the full spherical image acquired by the image acquisition part 2F1, the output images produced by the producing part 2F2, and the like.
The control part 2F6 controls each kind of component provided in the smart phone 2. The control part 2F6 controls each kind of component, and thereby realizes each process, the processes for assisting each process, and the like. The control part 2F6 is realized by, for example, the CPU 2H5 in Figure 6, and the like.
Here, the overall process is not limited to the case shown in Figure 7. For example, a device other than the devices shown in Figure 7 may process part or all of each process.
The smart phone 2 produces the editing image and the change image based on the full spherical image or the like acquired from the image capturing device 1. The editing image is an image for outputting the predetermined area determined by the predetermined area T, and for having the user perform an editing operation (for example, blurring, or canceling the blur). The change image is an image for having the user perform an operation for changing the position, the size, the range, or the like of the predetermined area T. The smart phone 2 outputs an output image that has at least the editing image and the change image. Since the output image has the editing image and the change image, the smart phone 2 enables the user to perform editing (for example, blurring) and, at the same time, to change, by means of the change image, the area output in the editing image. Therefore, when the user performs a blur operation on a full spherical image or the like, the smart phone 2 can output an image convenient for the operation. Therefore, by outputting an output image having the editing image and the change image, the smart phone 2 enables the user to perform image operations easily.
Here, the smart phone 2 may be realized by a computer-executable program described in a conventional programming language (for example, assembly, C, C++, C#, or Java (registered trademark)), an object-oriented programming language, or the like. The program may be stored in a recording medium (for example, a ROM or an electrically erasable programmable ROM (EEPROM)) and distributed therefrom. The program may be stored in a recording medium (for example, an erasable programmable ROM (EPROM)) and distributed therefrom. The program may be stored in a recording medium (for example, a flash memory, a flexible disk, a CD-ROM, a CD-RW, a DVD-ROM, a DVD-RAM, a DVD-RW, or the like) and distributed therefrom. The program may be stored in a device-readable recording medium (for example, a Blu-ray disc, an SD (registered trademark) card, or an MO), or distributed through a telecommunication line.
Here, the images in the embodiments are not limited to still images. For example, the images may be moving images.
Furthermore, part or all of each process in the embodiments may be realized by a programmable device (PD) (for example, a field programmable gate array (FPGA)). Furthermore, part or all of each process in the embodiments may be realized by an application specific integrated circuit (ASIC).
Although the preferred practical examples of the present invention have been described in detail above, the present invention is not limited to these particular embodiments, and various changes and modifications are possible within the scope of the essence of the present invention as stated in the claims.
[Annex]
<Illustrative embodiments of the information processing method, the information processing device, and the program>
At least one illustrative embodiment of the present invention may relate to an information processing method, an information processing device, and a program.
At least one illustrative embodiment of the present invention may aim at facilitating the execution of image operations by a user.
According to at least one illustrative embodiment of the present invention, there is provided an information processing method for causing a computer to process an image, characterized in that the computer is caused to execute: an acquisition step of acquiring the image; a producing step of producing an editing image for editing a predetermined area of the image and a change image for changing the predetermined area to be output; and an output step of outputting an output image having at least the editing image and the change image.
Illustrative embodiment (1) is an information processing method for causing a computer to process an image, wherein the information processing method causes the computer to execute: an acquisition step of acquiring the image; a producing step of producing an editing image for editing a predetermined area of the image and a change image for changing the predetermined area to be output; and an output step of outputting an output image having at least the editing image and the change image.
Illustrative embodiment (2) is the information processing method as described in illustrative embodiment (1), wherein an editing area input step of acquiring, by using the editing image, an editing area as the target area of the editing, and an editing step of editing the editing area are executed.
Illustrative embodiment (3) is the information processing method as described in illustrative embodiment (2), wherein the editing step is a step of blurring the editing area.
Illustrative embodiment (4) is the information processing method as described in illustrative embodiment (3), wherein a blurred image is produced by applying a blur process to the acquired image obtained in the acquisition step, and the output image is output by selecting the pixels of the blurred image for the editing area and the pixels of the acquired image for the area other than the editing area.
Illustrative embodiment (5) is the information processing method as described in any one of illustrative embodiments (2) to (4), wherein a specified area input step of acquiring a specified area that specifies part or all of the area of the image output with the editing image, and a cancellation step of canceling the editing process performed on the specified area are executed.
Illustrative embodiment (6) is the information processing method as described in any one of illustrative embodiments (1) to (5), wherein an operation input step of acquiring, by using the change image, an operation for changing, enlarging, or reducing the predetermined area output with the editing image is executed.
Illustrative embodiment (7) is the information processing method as described in illustrative embodiment (6), wherein a determination step of determining a viewpoint position and an angle of view is executed based on the operation, and the determination step changes one of the viewpoint position and the angle of view based on the area indicated by the operation.
Illustrative embodiment (8) is the information processing method as described in any one of illustrative embodiments (1) to (7), wherein a detection step of detecting the attitude of a device that displays the output image, and a change step of changing the position or the orientation of the change image based on the result of the detection performed by the detection step are executed.
Illustrative embodiment (9) is an information processing device that processes an image, wherein the information processing device has: an acquisition part that acquires the image; a producing part that produces an editing image for editing a predetermined area of the image and a change image for changing the predetermined area to be output; and an output part that outputs an output image having at least the editing image and the change image.
Illustrative embodiment (10) is a program for causing a computer to process an image, wherein the program causes the computer to execute: an acquisition step of acquiring the image; a producing step of producing an editing image for editing a predetermined area of the image and a change image for changing the predetermined area to be output; and an output step of outputting an output image having at least the editing image and the change image.
According to at least one illustrative embodiment of the present invention, the execution of image operations by a user can be facilitated.
Although the illustrative embodiments and specific examples of the present invention have been described with reference to the accompanying drawings, the present invention is not limited to any of the illustrative embodiments and specific examples, and the illustrative embodiments and specific examples may be altered, modified, or combined without departing from the scope of the present invention.
This application claims the benefit of priority based on Japanese Patent Application No. 2014-054782 filed on March 18, 2014, the entire contents of which are incorporated herein by reference.
Claims (10)
1. An information processing method for causing a computer to process an image, wherein the information processing method causes the computer to execute: an acquisition step of acquiring the image; a producing step of producing an editing image for editing a predetermined area of the image and a change image for changing the predetermined area to be output; and an output step of outputting an output image having at least the editing image and the change image.
2. The information processing method as claimed in claim 1, wherein the method executes: an editing area input step of acquiring, by using the editing image, an editing area as the target area of the editing; and an editing step of editing the editing area.
3. The information processing method as claimed in claim 2, wherein the editing step is a step of blurring the editing area.
4. The information processing method as claimed in claim 3, wherein a blurred image is produced by applying a blur process to the acquired image obtained in the acquisition step, and the output image is output by selecting the pixels of the blurred image for the editing area and the pixels of the acquired image for the area other than the editing area.
5. The information processing method as claimed in claim 2, wherein the method executes: a specified area input step of acquiring a specified area that specifies part or all of the area of the image output with the editing image; and a cancellation step of canceling the editing process performed on the specified area.
6. The information processing method as claimed in claim 1, wherein the method executes an operation input step of acquiring, by using the change image, an operation for changing, enlarging, or reducing the predetermined area output with the editing image.
7. The information processing method as claimed in claim 6, wherein a determination step of determining a viewpoint position and an angle of view is executed based on the operation, and the determination step changes one of the viewpoint position and the angle of view based on the area indicated by the operation.
8. The information processing method as claimed in claim 1, wherein the method executes: a detection step of detecting the attitude of a device that displays the output image; and a change step of changing the position or the orientation of the change image based on the detection result of the detection step.
9. An information processing device that processes an image, wherein the information processing device has: an acquisition part that acquires the image; a producing part that produces an editing image for editing a predetermined area of the image and a change image for changing the predetermined area to be output; and an output part that outputs an output image having at least the editing image and the change image.
10. A program for causing a computer to process an image, wherein the program causes the computer to execute: an acquisition step of acquiring the image; a producing step of producing an editing image for editing a predetermined area of the image and a change image for changing the predetermined area to be output; and an output step of outputting an output image having at least the editing image and the change image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910830851.1A CN110456967B (en) | 2014-03-18 | 2015-03-10 | Information processing method, information processing apparatus, and program |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2014054782A JP5835383B2 (en) | 2014-03-18 | 2014-03-18 | Information processing method, information processing apparatus, and program |
JP2014-054782 | 2014-03-18 | ||
PCT/JP2015/057609 WO2015141605A1 (en) | 2014-03-18 | 2015-03-10 | Information processing method, information processing device, and program |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910830851.1A Division CN110456967B (en) | 2014-03-18 | 2015-03-10 | Information processing method, information processing apparatus, and program |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106133794A true CN106133794A (en) | 2016-11-16 |
CN106133794B CN106133794B (en) | 2021-11-23 |
Family
ID=54144574
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201580013724.2A Active CN106133794B (en) | 2014-03-18 | 2015-03-10 | Information processing method, information processing apparatus, and program |
CN201910830851.1A Active CN110456967B (en) | 2014-03-18 | 2015-03-10 | Information processing method, information processing apparatus, and program |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910830851.1A Active CN110456967B (en) | 2014-03-18 | 2015-03-10 | Information processing method, information processing apparatus, and program |
Country Status (6)
Country | Link |
---|---|
US (3) | US9760974B2 (en) |
EP (1) | EP3120327A4 (en) |
JP (1) | JP5835383B2 (en) |
CN (2) | CN106133794B (en) |
CA (1) | CA2941469C (en) |
WO (1) | WO2015141605A1 (en) |
Families Citing this family (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10255664B2 (en) * | 2014-07-04 | 2019-04-09 | Sony Corporation | Image processing device and method |
JP6518069B2 (en) * | 2015-01-09 | 2019-05-22 | キヤノン株式会社 | Display device, imaging system, display device control method, program, and recording medium |
JP5987931B2 (en) | 2015-02-09 | 2016-09-07 | 株式会社リコー | Video display system, information processing apparatus, video display method, video display program, video processing apparatus, video processing method, and video processing program |
USD791146S1 (en) * | 2015-09-25 | 2017-07-04 | Sz Dji Osmo Technology Co., Ltd. | Display screen or portion thereof with animated graphical user interface |
US10732809B2 (en) | 2015-12-30 | 2020-08-04 | Google Llc | Systems and methods for selective retention and editing of images captured by mobile image capture device |
EP3422696A4 (en) | 2016-02-24 | 2019-03-13 | Ricoh Company, Ltd. | Image processing device, image processing system, and program |
TWI567691B (en) * | 2016-03-07 | 2017-01-21 | 粉迷科技股份有限公司 | Method and system for editing scene in three-dimensional space |
JP7133900B2 (en) * | 2016-11-17 | 2022-09-09 | 株式会社Nttファシリティーズ | Shooting position specifying system, shooting position specifying method, and program |
JP6885158B2 (en) * | 2017-03-31 | 2021-06-09 | 株式会社リコー | Image processing device, photographing device, and image processing method |
JP7122729B2 (en) * | 2017-05-19 | 2022-08-22 | 株式会社ユピテル | Drive recorder, display device and program for drive recorder |
JP2019099219A (en) * | 2017-12-01 | 2019-06-24 | 株式会社イオグランツ | Package body |
JP7268372B2 (en) * | 2019-01-31 | 2023-05-08 | 株式会社リコー | Imaging device |
US11436776B2 (en) * | 2019-03-15 | 2022-09-06 | Canon Kabushiki Kaisha | Information processing apparatus and control method thereof |
US20240112305A1 (en) * | 2021-02-11 | 2024-04-04 | Cosm,Inc. | Real-time fiducials and event-driven graphics in panoramic video |
CN117616447A (en) | 2021-07-09 | 2024-02-27 | 三星电子株式会社 | Electronic devices and methods of operating electronic devices |
JP7492497B2 (en) * | 2021-12-27 | 2024-05-29 | 株式会社コロプラ | PROGRAM, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING APPARATUS |
Family Cites Families (53)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5341466A (en) * | 1991-05-09 | 1994-08-23 | New York University | Fractal computer user centerface with zooming capability |
US6121966A (en) * | 1992-11-02 | 2000-09-19 | Apple Computer, Inc. | Navigable viewing system |
JPH06325144A (en) | 1993-05-12 | 1994-11-25 | Toppan Printing Co Ltd | Layout design system |
JPH10340075A (en) | 1997-06-06 | 1998-12-22 | Matsushita Electric Ind Co Ltd | Image display method |
US7620909B2 (en) * | 1999-05-12 | 2009-11-17 | Imove Inc. | Interactive image seamer for panoramic images |
US7149549B1 (en) * | 2000-10-26 | 2006-12-12 | Ortiz Luis M | Providing multiple perspectives for a venue activity through an electronic hand held device |
JP2002329212A (en) | 2001-05-02 | 2002-11-15 | Sony Corp | Device and method for information processing, recording medium, and program |
JP4439763B2 (en) * | 2001-07-04 | 2010-03-24 | 株式会社リコー | Image recording / reproducing system and image recording / reproducing method |
JP2003132362A (en) * | 2001-10-22 | 2003-05-09 | Sony Corp | Information communication system, information communication method and computer program |
JP3641747B2 (en) * | 2001-10-23 | 2005-04-27 | ヴイストン株式会社 | Image display method and image display apparatus |
US6833843B2 (en) * | 2001-12-03 | 2004-12-21 | Tempest Microsystems | Panoramic imaging and display system with canonical magnifier |
JP2004254256A (en) * | 2003-02-24 | 2004-09-09 | Casio Comput Co Ltd | Camera device, display method and program |
FR2854265B1 (en) * | 2003-04-28 | 2006-05-19 | Snecma Moteurs | OPTIMIZING ERGONOMICS WHEN MOVING A VIRTUAL MANNEQUIN |
JP4635437B2 (en) | 2004-01-07 | 2011-02-23 | ソニー株式会社 | Electronic apparatus and image display method |
JP4756876B2 (en) | 2004-06-09 | 2011-08-24 | キヤノン株式会社 | Image display control device, image display control method, program, and storage medium |
US7839446B2 (en) * | 2005-08-30 | 2010-11-23 | Olympus Corporation | Image capturing apparatus and image display apparatus including imparting distortion to a captured image |
JP4916237B2 (en) | 2005-09-16 | 2012-04-11 | 株式会社リコー | Image display apparatus, image display method, program for causing computer to execute the method, and image display system |
WO2007055336A1 (en) * | 2005-11-11 | 2007-05-18 | Sony Corporation | Image processing device, image processing method, program thereof, recording medium containing the program, and imaging device |
US8477154B2 (en) * | 2006-03-20 | 2013-07-02 | Siemens Energy, Inc. | Method and system for interactive virtual inspection of modeled objects |
US7956847B2 (en) | 2007-01-05 | 2011-06-07 | Apple Inc. | Gestures for controlling, manipulating, and editing of media files using touch sensitive devices |
JP4841449B2 (en) | 2007-01-29 | 2011-12-21 | ソニー株式会社 | Imaging apparatus, image editing method, and program |
JP4345829B2 (en) * | 2007-03-09 | 2009-10-14 | ソニー株式会社 | Image display system, image display apparatus, image display method, and program |
US8259208B2 (en) * | 2008-04-15 | 2012-09-04 | Sony Corporation | Method and apparatus for performing touch-based adjustments within imaging devices |
US8214766B1 (en) | 2008-07-09 | 2012-07-03 | Adobe Systems Incorporated | Method and system for preview control for image adjustment |
EP2207342B1 (en) | 2009-01-07 | 2017-12-06 | LG Electronics Inc. | Mobile terminal and camera image control method thereof |
JP5460173B2 (en) * | 2009-08-13 | 2014-04-02 | 富士フイルム株式会社 | Image processing method, image processing apparatus, image processing program, and imaging apparatus |
WO2011055451A1 (en) | 2009-11-06 | 2011-05-12 | パイオニア株式会社 | Information processing device, method therefor, and display device |
KR101662846B1 (en) | 2010-05-12 | 2016-10-06 | 삼성전자주식회사 | Apparatus and method for generating bokeh in out-of-focus shooting |
JP5645626B2 (en) | 2010-12-06 | 2014-12-24 | キヤノン株式会社 | Display control apparatus, display control method, program, and storage medium |
JP5701040B2 (en) * | 2010-12-14 | 2015-04-15 | キヤノン株式会社 | Image processing apparatus, image processing method, and program |
JP5678324B2 (en) | 2011-02-10 | 2015-03-04 | パナソニックIpマネジメント株式会社 | Display device, computer program, and display method |
JP5561214B2 (en) * | 2011-03-15 | 2014-07-30 | オムロン株式会社 | Image processing apparatus and image processing program |
US8898630B2 (en) * | 2011-04-06 | 2014-11-25 | Media Direct, Inc. | Systems and methods for a voice- and gesture-controlled mobile application development and deployment platform |
JP2012249175A (en) * | 2011-05-30 | 2012-12-13 | Olympus Imaging Corp | Imaging apparatus, display method, and program |
CN103186324A (en) | 2011-12-29 | 2013-07-03 | 富泰华工业(深圳)有限公司 | Image editing system and image editing method |
US8693776B2 (en) | 2012-03-02 | 2014-04-08 | Adobe Systems Incorporated | Continuously adjustable bleed for selected region blurring |
US9041727B2 (en) | 2012-03-06 | 2015-05-26 | Apple Inc. | User interface tools for selectively applying effects to image |
JP6303270B2 (en) * | 2012-05-18 | 2018-04-04 | 株式会社リコー | Video conference terminal device, video conference system, video distortion correction method, and video distortion correction program |
US10037820B2 (en) * | 2012-05-29 | 2018-07-31 | Medical Avatar Llc | System and method for managing past, present, and future states of health using personalized 3-D anatomical models |
JP6186775B2 (en) | 2012-05-31 | 2017-08-30 | 株式会社リコー | Communication terminal, display method, and program |
JP6006536B2 (en) * | 2012-06-01 | 2016-10-12 | 任天堂株式会社 | Information processing program, information processing apparatus, information processing system, and panoramic video display method |
KR20140028311A (en) | 2012-08-28 | 2014-03-10 | 삼성전자주식회사 | Method for setting a selecting region and an electronic device thereof |
US20140062917A1 (en) | 2012-08-29 | 2014-03-06 | Samsung Electronics Co., Ltd. | Method and apparatus for controlling zoom function in an electronic device |
JP6153318B2 (en) * | 2012-11-30 | 2017-06-28 | キヤノン株式会社 | Image processing apparatus, image processing method, image processing program, and storage medium |
US9124762B2 (en) * | 2012-12-20 | 2015-09-01 | Microsoft Technology Licensing, Llc | Privacy camera |
KR20140086632A (en) * | 2012-12-28 | 2014-07-08 | Samsung Display Co., Ltd. | Image processing device and display device having them |
JP6075066B2 (en) | 2012-12-28 | 2017-02-08 | Ricoh Company, Ltd. | Image management system, image management method, and program |
KR101999140B1 (en) * | 2013-01-03 | 2019-07-11 | Samsung Electronics Co., Ltd. | Apparatus and method for shooting and processing an image in camera device and portable terminal having a camera |
KR101988313B1 (en) | 2013-01-04 | 2019-06-12 | LG Electronics Inc. | Mobile terminal and controlling method thereof, and recording medium thereof |
US20150046299A1 (en) * | 2013-08-12 | 2015-02-12 | Sap Ag | Inventory Assessment with Mobile Devices |
EP3331231A1 (en) * | 2013-08-28 | 2018-06-06 | Ricoh Company Ltd. | Image processing apparatus, image processing method, and imaging system |
US9253415B2 (en) * | 2013-11-27 | 2016-02-02 | Adobe Systems Incorporated | Simulating tracking shots from image sequences |
US9210321B2 (en) * | 2013-12-05 | 2015-12-08 | Here Global B.V. | Method and apparatus for a shutter animation for image capture |
2014
- 2014-03-18 JP JP2014054782A patent/JP5835383B2/en active Active

2015
- 2015-03-10 WO PCT/JP2015/057609 patent/WO2015141605A1/en active Application Filing
- 2015-03-10 CN CN201580013724.2A patent/CN106133794B/en active Active
- 2015-03-10 CA CA2941469A patent/CA2941469C/en not_active Expired - Fee Related
- 2015-03-10 CN CN201910830851.1A patent/CN110456967B/en active Active
- 2015-03-10 EP EP15765011.0A patent/EP3120327A4/en not_active Withdrawn
- 2015-10-28 US US14/924,871 patent/US9760974B2/en active Active

2017
- 2017-08-08 US US15/671,338 patent/US10304157B2/en active Active

2019
- 2019-04-02 US US16/372,764 patent/US20190228500A1/en not_active Abandoned
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101305596A (en) * | 2005-11-11 | 2008-11-12 | Sony Corporation | Image processing device, image processing method, program thereof, recording medium containing the program, and imaging device |
US20070183000A1 (en) * | 2005-12-16 | 2007-08-09 | Ori Eisen | Methods and apparatus for securely displaying digital images |
US20100162163A1 (en) * | 2008-12-18 | 2010-06-24 | Nokia Corporation | Image magnification |
CN102483859A (en) * | 2009-09-29 | 2012-05-30 | 索尼计算机娱乐公司 | Panoramic image display device and panoramic image display method |
JP2012029179A (en) * | 2010-07-27 | 2012-02-09 | Nippon Seiki Co Ltd | Peripheral image display device and display method thereof |
CN104246832A (en) * | 2012-05-14 | 2014-12-24 | 哈特弗罗公司 | Method and system for providing information from a patient-specific model of blood flow |
JP2014010611A (en) * | 2012-06-29 | 2014-01-20 | Ricoh Co Ltd | Transmission device, image sharing system, transmission method, and program |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10845942B2 (en) | 2016-08-31 | 2020-11-24 | Sony Corporation | Information processing device and information processing method |
CN110537199A (en) * | 2017-05-12 | 2019-12-03 | Panasonic Intellectual Property Management Co., Ltd. | Image processing apparatus and image processing method |
CN110278368A (en) * | 2018-03-15 | 2019-09-24 | Ricoh Company, Ltd. | Image processing device, photography system, and image processing method |
US10855916B2 (en) | 2018-03-15 | 2020-12-01 | Ricoh Company, Ltd. | Image processing apparatus, image capturing system, image processing method, and recording medium |
Also Published As
Publication number | Publication date |
---|---|
US9760974B2 (en) | 2017-09-12 |
JP5835383B2 (en) | 2015-12-24 |
US20190228500A1 (en) | 2019-07-25 |
CN110456967A (en) | 2019-11-15 |
US10304157B2 (en) | 2019-05-28 |
US20160048942A1 (en) | 2016-02-18 |
EP3120327A1 (en) | 2017-01-25 |
CN110456967B (en) | 2023-05-02 |
WO2015141605A1 (en) | 2015-09-24 |
CA2941469A1 (en) | 2015-09-24 |
JP2015176559A (en) | 2015-10-05 |
EP3120327A4 (en) | 2017-01-25 |
US20170337658A1 (en) | 2017-11-23 |
CA2941469C (en) | 2018-05-08 |
CN106133794B (en) | 2021-11-23 |
Similar Documents
Publication | Title |
---|---|
CN106133794A (en) | Information processing method, messaging device and program |
EP3120328B1 (en) | Information processing method, information processing device, and program | |
CN103460684A (en) | Image processing apparatus, imaging system, and image processing system | |
US10701286B2 (en) | Image processing device, image processing system, and non-transitory storage medium | |
JP6350695B2 (en) | Apparatus, method, and program | |
JP2021087043A (en) | Operation method for information processing device, program, and information processing device | |
JP6583486B2 (en) | Information processing method, information processing program, and information processing apparatus | |
JP6128185B2 (en) | Apparatus, method, and program | |
JP6777208B2 (en) | program | |
CN114339029B (en) | Shooting method and device and electronic equipment | |
JP2016021267A (en) | Device, method, and program | |
HK1228553B (en) | Information processing method, information processing device, and program | |
HK1228553A1 (en) | Information processing method, information processing device, and program | |
JP2017224330A (en) | Device, method, and program | |
CN116957926A (en) | Image amplification method and device and electronic equipment |
Legal Events
Code | Title |
---|---|
C06 | Publication |
PB01 | Publication |
C10 | Entry into substantive examination |
SE01 | Entry into force of request for substantive examination |
GR01 | Patent grant |