CN111192263B - Intelligent energy-saving indoor people counting method based on machine vision - Google Patents
- Publication number
- CN111192263B (application number CN202010023087.XA)
- Authority
- CN
- China
- Prior art keywords
- image
- subarea
- detected
- value
- machine vision
- Prior art date
- Legal status
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30242—Counting objects in image
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The application discloses a machine-vision-based intelligent energy-saving indoor people counting method, which comprises the following steps: S1, acquiring an image to be detected and a background image with a camera; S2, dividing the background image into a plurality of sub-areas, each containing a reference object of the same size; taking the pixel length of the reference object in each sub-area as its calibration measurement value; taking the pixel length of the reference object in the sub-area closest to the camera as the calibration reference value; and dividing each calibration measurement value by the calibration reference value to obtain the calibration coefficient of each sub-area; S3, performing shadow elimination, preprocessing, inversion, and mathematical morphology processing on the image to be detected, determining a person judgment threshold, and deriving the threshold of each sub-area; S7, counting the number of people in each sub-area according to its threshold. The intelligent energy-saving indoor people counting method based on machine vision provided by the application overcomes the poor counting accuracy of prior-art machine-vision people counting.
Description
Technical Field
The application relates to the technical field of computer vision, in particular to a machine vision-based intelligent energy-saving indoor people counting method.
Background
With the rapid development of the economy, the demand for energy has increased considerably. Related statistics show that building energy consumption in China accounts for more than 20% of total social energy consumption; of this, air-conditioning systems average 40%, and lighting is the second-largest consumer at nearly one quarter of total building energy use. As the concepts of intelligent buildings, energy conservation, and emission reduction spread, the large energy consumption of central air conditioning and lighting, even as they create a comfortable indoor environment, has attracted great attention. Reducing it has become an essential link in energy conservation, emission reduction, and green-city construction.
At present, most central air-conditioning and lighting systems are switched and adjusted uniformly, which commonly causes two problems. First, the air conditioning and lighting stay on after people leave, or the air output remains high when few people are present, wasting considerable energy. Second, the systems cannot distinguish crowded areas from sparse ones: the air output is the same in every area, so the air conditioning behaves identically where people gather and where they are scattered, and the central air conditioner never reaches its maximum efficiency.
Energy can be saved by controlling air conditioning and lighting according to the distribution of people across different areas of a building. A key prerequisite for this is accurate indoor people counting.
The most common indoor people-counting method today uses infrared detection. An infrared sensor requires emitting and receiving equipment and works by through-beam or reflective sensing of infrared light: when the beam is blocked, the optical signal at the receiver weakens, and the signal is sampled after analog-to-digital conversion and amplification. However, such a system needs a large number of infrared sensors, has a complex structure, requires laborious placement calculations and wiring, and increases implementation cost and energy consumption, so its energy-saving effect is unsatisfactory.
Meanwhile, for security reasons the office areas of many companies are already equipped with cameras. Compared with traditional infrared detection with infrared sensors, a machine-vision people-counting method needs no hardware beyond the cameras, is low-cost, and counts people more conveniently.
However, in implementing the embodiments of the application, the inventor found that existing machine-vision people-counting methods have too low an accuracy to meet the requirements of practical application.
Therefore, providing a machine-vision-based intelligent energy-saving indoor people counting method that improves counting accuracy has become a technical problem to be solved in the field.
Disclosure of Invention
In order to make up the defects of the prior art, the application provides a machine vision-based intelligent energy-saving indoor people counting method.
The technical problems to be solved by the application are realized by the following technical scheme:
an intelligent energy-saving indoor people counting method based on machine vision comprises the following steps:
S1, acquiring an image to be detected and a background image by using a camera, wherein the background image does not include a portrait;
S2, dividing the background image into a plurality of sub-areas, each containing a reference object of the same size; taking the pixel length of the reference object in each sub-area as its calibration measurement value; taking the pixel length of the reference object in the sub-area closest to the camera as the calibration reference value; and dividing each calibration measurement value by the calibration reference value to obtain the calibration coefficient of each sub-area;
S3, performing shadow elimination processing on the image to be detected;
S4, sequentially performing preprocessing, inversion processing, and mathematical morphology processing on the image to be detected;
S5, performing connected region analysis on the image to be detected after the mathematical morphology processing, and determining a person judgment threshold;
S6, obtaining the threshold of each sub-area from the person judgment threshold and the calibration coefficients obtained in step S2;
S7, counting the number of people in each sub-area according to its threshold.
Further, acquiring the image to be detected and the background image with the camera includes: capturing a video of the monitoring area with the camera; then selecting from the video's image sequence one frame that includes a portrait as the image to be detected and one frame that includes no portrait as the background image.
Further, the reference object is furniture, and the background image is divided into four sub-areas.
Further, the reference object is a table.
Further, in step S3 the shadow elimination processing is performed by means of a color-space conversion.
Further, the shadow elimination processing includes: converting the image to be detected and the background image from the RGB color space to the HSV color space, where R is red, G is green, B is blue, H is hue, S is saturation, and V is value (brightness); and performing shadow elimination with formula (1);
In formula (1), I(x, y) denotes the value at (x, y) of the binarized image; H1, S1, V1 denote the HSV components of the image to be detected; H2, S2, V2 denote the HSV components of the background image; and Ht, St, Vt denote the hue, saturation, and value thresholds used for shadow removal.
Further, preprocessing the image to be detected includes: dividing the image to be detected into an R component image, a G component image and a B component image, and then carrying out binarization operation on each component image to determine a component binarization map for subsequent people counting.
Further, the inversion processing and mathematical morphology processing of the image to be detected include: inverting the determined component binarization map and the binarization map of the background image, subtracting the two images, and then applying erosion followed by dilation to obtain the connected regions of the image to be detected.
Further, in step S5, the connected region analysis of the morphologically processed image to be detected includes: counting the pixels of each connected region.
Further, in step S6, the person determination threshold is multiplied by the calibration coefficient of each sub-area obtained in step S2, so as to obtain the threshold of each sub-area.
Further, in step S7, it is determined whether the pixel count of each connected region in a sub-area is greater than the threshold of that sub-area; if so, the region is counted as a person.
The application has the following beneficial effects:
The intelligent energy-saving indoor people counting method based on machine vision provided by the application overcomes the poor counting accuracy of prior-art machine-vision methods. Applied to intelligent energy-saving indoor scenes, it performs intelligent people counting on indoor monitoring images; by detecting how many people are indoors and where they are distributed, it can automatically control indoor electrical equipment such as lamps and air conditioners, helping realize energy-saving and environment-friendly intelligent building scenes such as intelligent offices, intelligent classrooms, libraries, and intelligent meeting rooms.
Drawings
Fig. 1 is a view of each sub-region of the background image division in embodiment 1;
fig. 2 is a background image in embodiment 1;
fig. 3 is an image to be detected in embodiment 1;
FIG. 4 is a histogram of shadows in the HSV color space in embodiment 1;
fig. 5 is the image to be detected after shadow elimination in embodiment 1;
fig. 6 is a component image of the image to be detected in embodiment 1; wherein:
a: an R component image;
b: g component images;
c: b component image.
Fig. 7 is a component-image binarization chart of the image to be detected in embodiment 1; wherein:
a: r component binarization map;
b: g component binarization map;
c: b component binarization map.
Fig. 8 is an inverse diagram of the B-component image binarization map in embodiment 1;
fig. 9 is an inverse diagram of the background image binarization map in embodiment 1;
fig. 10 is the difference binarization map obtained in embodiment 1 by inverting and subtracting the B component binarization map and the binarization map of the background image;
fig. 11 shows the connected regions after mathematical morphology processing in embodiment 1.
In the figures: 1, 2, 3 and 4 denote sub-areas 1 to 4.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application. The components of the embodiments of the present application generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the application, as presented in the figures, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
In the description of the present application, it should be noted that, directions or positional relationships indicated by terms such as "upper", "lower", "inner", "outer", etc., are directions or positional relationships based on those shown in the drawings, or those that are conventionally put in use, are merely for convenience of describing the present application and simplifying the description, and do not indicate or imply that the apparatus or elements to be referred to must have a specific direction, be constructed and operated in a specific direction, and thus should not be construed as limiting the present application. Furthermore, the terms "first," "second," "third," and the like are used merely to distinguish between descriptions and should not be construed as indicating or implying relative importance.
In the description of the present application, it should also be noted that, unless explicitly specified and limited otherwise, the terms "disposed", "connected" and "connected" are to be construed broadly, and may be, for example, fixedly connected, detachably connected or integrally connected; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the above terms in the present application will be understood in specific cases by those of ordinary skill in the art.
The raw materials and equipment used in the application are common raw materials and equipment in the field unless specified otherwise; the methods used in the present application are conventional in the art unless otherwise specified.
Unless otherwise defined, all terms used in the specification have the same meaning as commonly understood by one of ordinary skill in the art, but are defined in the specification to be used in the event of a conflict.
The terms "comprising," "including," "containing," "having," or other variations thereof herein are intended to cover a non-closed inclusion, without distinguishing between them. The term "comprising" means that other steps and ingredients may be added that do not affect the end result. The term "comprising" also includes the terms "consisting of …" and "consisting essentially of …". The compositions and methods/processes of the present application comprise, consist of, and consist essentially of the essential elements and limitations described herein, as well as additional or optional ingredients, components, steps, or limitations of any of the embodiments described herein.
As described in the background art, prior-art machine-vision people counting suffers from low accuracy, but the cause of this defect was previously unclear. The inventor found the cause to be the following: because the camera position is fixed, objects near the camera appear large and objects far away appear small when the office area is photographed. If connected regions are detected directly with mathematical morphology on the camera frame, the inconsistent apparent "size" of people in different areas causes omissions. When identifying people with an image-based method, a person-scale calibration must therefore be performed first, which overcomes this defect.
An intelligent energy-saving indoor people counting method based on machine vision comprises the following steps:
S1, acquiring an image to be detected and a background image by using a camera, wherein the background image does not include a portrait;
specifically, the obtaining the image to be detected and the background image by using the camera includes: acquiring a video of a monitoring area by using a camera; and selecting a frame of images including the portrait from the image sequence of the video to serve as the images to be detected, and selecting a frame of images not including the portrait to serve as the background images.
It should be noted that the image to be detected and the background image are images of the same area acquired by the camera.
S2, dividing the background image into a plurality of sub-areas, each containing a reference object of the same size; taking the pixel length of the reference object in each sub-area as its calibration measurement value; taking the pixel length of the reference object in the sub-area closest to the camera as the calibration reference value; and dividing each calibration measurement value by the calibration reference value to obtain the calibration coefficient of each sub-area;
in the present application, the reference object is preferably furniture, and the kind of furniture is not particularly limited in the present application, and the furniture is, for example, a table, but not limited thereto.
In the application, the background image is divided into a plurality of sub-areas. The number of the sub-regions is not particularly limited, and a person skilled in the art may select the sub-regions as needed, and may divide the background image into four regions, six regions, or eight regions, for example.
In an actual scene all reference objects are the same size, but after imaging by the camera they show an obvious near-large, far-small perspective effect, which leads to omissions in people counting. The application creatively performs the person-scale calibration first: by dividing the image into sub-areas and obtaining a threshold for each sub-area, false detections are avoided and the accuracy of people counting is improved.
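The scale calibration of step S2 is simple arithmetic. The following sketch (plain Python; the sub-area names and pixel lengths are illustrative, in the spirit of embodiment 1, not values taken from the drawings) shows how the calibration coefficients would be derived:

```python
# Sketch of the person-scale calibration in step S2. The pixel length of
# the same-sized reference object is measured in each sub-area; dividing
# by the length in the sub-area nearest the camera (the calibration
# reference value) yields each sub-area's calibration coefficient.
def calibration_coefficients(pixel_lengths, reference_area):
    ref = pixel_lengths[reference_area]
    return {area: length / ref for area, length in pixel_lengths.items()}

# Illustrative measurements (sub-area 1 is nearest the camera).
lengths = {"sub1": 70, "sub2": 80, "sub3": 60, "sub4": 50}
coeffs = calibration_coefficients(lengths, "sub1")
```

The coefficients are then used in step S6 to scale the person judgment threshold per sub-area.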
S3, shadow elimination processing is carried out on the image to be detected;
according to the application, the influence of light shadows in the image shot by the camera on the statistics of the number of people is reduced by carrying out shadow elimination treatment on the image to be detected, and the accuracy of the statistics of the number of people is improved.
Preferably, the shadow elimination processing is performed by means of a color-space conversion, exploiting the distinctive behavior of a person's shadow in the HSV color space of the image to be detected.
Specifically, the shadow elimination processing includes: converting the image to be detected and the background image from the RGB color space to the HSV color space, where R is red, G is green, B is blue, H is hue, S is saturation, and V is value (brightness); and performing shadow elimination with formula (1);
In formula (1), I(x, y) denotes the value at (x, y) of the binarized image; H1, S1, V1 denote the HSV components of the image to be detected; H2, S2, V2 denote the HSV components of the background image; and Ht, St, Vt denote the hue, saturation, and value thresholds used for shadow removal.
It should be noted that the method for converting the RGB color space into the HSV color space for the image to be detected and the background image is not particularly limited, and the method and the principle thereof are known by the skilled person through technical manuals or through routine experimental methods, and are not described herein.
The RGB color space is based on the three primary colors R (red), G (green), and B (blue); superimposing them in different proportions produces a rich and wide range of colors, hence the name three-primary-color model.
The HSV color space is a commonly used color space whose representation of color better matches human perception. The parameters of a color in this model are hue (H), saturation (S), and value (V, i.e. brightness).
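Formula (1) itself is not reproduced in this text, so the per-pixel sketch below uses a common HSV shadow test consistent with the description: a pixel is treated as shadow when its hue and saturation barely change relative to the background while its brightness drops. The threshold values `h_t`, `s_t`, `v_t` are illustrative assumptions, not the patent's values.

```python
import colorsys  # stdlib RGB<->HSV conversion

# Hedged sketch of HSV-based shadow detection (step S3): hue and
# saturation stay close to the background's under shadow, while the
# value (brightness) component drops noticeably.
def is_shadow(fg_rgb, bg_rgb, h_t=0.1, s_t=0.1, v_t=0.2):
    h1, s1, v1 = colorsys.rgb_to_hsv(*(c / 255 for c in fg_rgb))
    h2, s2, v2 = colorsys.rgb_to_hsv(*(c / 255 for c in bg_rgb))
    return abs(h1 - h2) <= h_t and abs(s1 - s2) <= s_t and (v2 - v1) >= v_t

bg = (120, 110, 100)       # background pixel
shadowed = (60, 55, 50)    # same colour at half brightness: classified as shadow
person = (200, 30, 30)     # different hue/saturation: not shadow
```

In a full pipeline this test would be applied per pixel to mask out shadows before the binarization of step S4.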
S4, preprocessing, inversion processing, and mathematical morphology processing are sequentially performed on the image to be detected;
wherein the preprocessing comprises the following steps: dividing the image to be detected into an R component image, a G component image and a B component image, and then carrying out binarization operation on each component image to determine a component binarization map for subsequent people counting.
The inversion processing and mathematical morphology processing of the image to be detected include: inverting the previously determined component binarization map and the binarization map of the background image, subtracting the two images, and then applying erosion followed by dilation to obtain the connected regions of the image to be detected.
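The binarize/invert/subtract sequence of step S4 can be sketched on a tiny synthetic image (numpy assumed; the fixed binarization threshold is an illustrative choice, not taken from the patent):

```python
import numpy as np

# Sketch of the step S4 differencing: binarize the chosen component
# image and the background image, invert both, and subtract so that
# only the foreground (person) pixels remain set.
def binarize(channel, thresh=128):
    return (channel >= thresh).astype(np.uint8)  # 1 where bright

bg_b = np.full((4, 4), 200, dtype=np.uint8)  # bright, empty background
fg_b = bg_b.copy()
fg_b[1:3, 1:3] = 40                          # dark 2x2 "person" blob

fg_inv = 1 - binarize(fg_b)                  # inversion step
bg_inv = 1 - binarize(bg_b)
diff = fg_inv - bg_inv                       # difference binarization map
```

The 2x2 blob survives in `diff` while the unchanged background cancels out.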
The erosion removes scattered bright spots from the image; however, it may also crack regions that were originally connected, so dilation is applied afterwards to reconnect these cracked regions and avoid leaving isolated islands.
In the present application, the specific methods of erosion and dilation are not particularly limited; their operation and principles are known to the skilled person from technical manuals or routine experiment and are not described here.
S5, carrying out connected region analysis on the image to be detected after mathematical morphology processing, and determining a personnel judgment threshold;
the method for analyzing the connected region of the image to be detected after mathematical morphology processing comprises the following steps: and counting pixel values of each connected region.
In this step, the connected regions of the sub-area whose reference object supplied the calibration reference value in step S2 are analyzed.
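The connected region analysis of step S5 amounts to labeling the connected components of the binary mask and recording each one's pixel count; a minimal 4-connected flood-fill sketch in plain Python:

```python
# Sketch of step S5: find 4-connected regions of a binary mask and
# return the pixel count of each region.
def connected_regions(mask):
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    sizes = []
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and not seen[sy][sx]:
                stack, size = [(sy, sx)], 0
                seen[sy][sx] = True
                while stack:
                    y, x = stack.pop()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                sizes.append(size)
    return sizes

grid = [
    [1, 1, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 1],
]
# two regions: one of 4 pixels and one of 2 pixels
```

The person judgment threshold is then chosen from the sizes measured in the reference sub-area.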
S6, obtaining the threshold value of each subarea according to the personnel judgment threshold value and the calibration coefficient of each subarea obtained in the step S2;
specifically, the personnel judgment threshold value is multiplied by the calibration coefficient of each subarea obtained in the step S2, and the threshold value of each subarea is obtained.
S7, counting the number of people in each subarea according to the threshold value of each subarea.
Specifically, the pixel count of every connected region in each sub-area is computed and compared with the threshold of that sub-area; if it is greater, the region is counted as a person.
According to the application, the number of people in a sub-area is the number of connected regions whose pixel count exceeds that sub-area's threshold, and summing over all sub-areas gives the total number of people.
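Steps S6 and S7 then reduce to scaling the base threshold by each sub-area's coefficient and counting the oversized regions. The sketch below reuses the pixel counts and coefficients reported in embodiment 1 (the sub-area key names are illustrative):

```python
# Sketch of steps S6-S7: per sub-area, threshold = base person judgment
# threshold * calibration coefficient; a connected region larger than
# the threshold counts as one person.
def count_people(region_sizes_by_area, base_threshold, coefficients):
    total = 0
    for area, sizes in region_sizes_by_area.items():
        threshold = base_threshold * coefficients[area]
        total += sum(1 for size in sizes if size > threshold)
    return total

# Connected-region pixel counts and coefficients from embodiment 1.
sizes = {"sub1": [8012], "sub2": [13563, 200], "sub3": [], "sub4": [7282, 180]}
coeffs = {"sub1": 70 / 70, "sub2": 80 / 70, "sub3": 60 / 70, "sub4": 50 / 70}
```

With a base threshold of 8000 this reproduces the 3 people counted in embodiment 1.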
The present application will be described in detail with reference to the following examples, which are only preferred embodiments of the present application and are not limiting thereof.
Example 1
An intelligent energy-saving indoor people counting method based on machine vision comprises the following steps:
S1, acquiring a video of a monitoring area by using a camera; selecting from the video's image sequence one frame that includes a portrait as the image to be detected and one frame that includes no portrait as the background image;
S2, dividing the background image into four sub-areas, denoted sub-area 1, sub-area 2, sub-area 3, and sub-area 4, each containing an office desk of the same size; taking the pixel length of the desk in each sub-area as its calibration measurement value: 70 in sub-area 1, 80 in sub-area 2, 60 in sub-area 3, and 50 in sub-area 4; taking the pixel length of the desk in sub-area 1, which is closer to the camera, as the calibration reference value, i.e. 70; and dividing each calibration measurement value by the calibration reference value to obtain the calibration coefficients: 70/70 for sub-area 1, 80/70 for sub-area 2, 60/70 for sub-area 3, and 50/70 for sub-area 4;
S3, shadow elimination processing is carried out on the image to be detected: the image to be detected and the background image are converted from the RGB color space to the HSV color space, where R is red, G is green, B is blue, H is hue, S is saturation, and V is value (brightness); shadow elimination is carried out with formula (1);
In formula (1), I(x, y) denotes the value at (x, y) of the binarized image; H1, S1, V1 denote the HSV components of the image to be detected; H2, S2, V2 denote the HSV components of the background image; and Ht, St, Vt denote the hue, saturation, and value thresholds used for shadow removal.
The office-area background image and the image to be detected are shown in fig. 2 and fig. 3, respectively. Fig. 4 shows how the histograms of the image to be detected change in the HSV color space: in fig. 4, H1, S1, V1 are the histograms of a region containing shadow, and H2, S2, V2 are the histograms of a shadow-free region. As can be seen from fig. 4, whether or not a region contains a person's shadow, the hue H and saturation S remain essentially unchanged, while the value (brightness) V is significantly reduced under shadow. The person's shadow in the image can therefore be removed by exploiting this drop in brightness.
S4, preprocessing, inversion processing, and mathematical morphology processing are sequentially carried out on the image to be detected;
Preprocessing the image to be detected includes: splitting the image to be detected into an R component image, a G component image, and a B component image, as shown in fig. 6; binarizing each component image, as shown in fig. 7, to obtain R, G, and B component binarization maps; and selecting the B component binarization map as the component binarization map for subsequent people counting.
The inversion processing and mathematical morphology processing of the image to be detected include: as shown in figs. 8 to 10, inverting the B component binarization map determined above and the binarization map of the background image and subtracting them; then applying erosion followed by dilation to the resulting difference binarization map to obtain the connected regions of the image to be detected, as shown in fig. 11.
S5, carrying out connected region analysis on the image to be detected after morphological processing, and determining a personnel judgment threshold value;
as can be seen from fig. 11, since there are 5 connected regions in the figure, and the connected regions in the sub-region 1 are selected and counted for pixel values, the pixel value in the sub-region is 8012, and therefore, the person determination threshold of the sub-region 1 can be set to 8000. Only if the communication area is larger than 8000 in the sub-area 1, the person is judged; otherwise, the user is not considered to be a person, and statistics are not carried out.
S6, the calibration coefficients obtained in step S2 are 70/70 for sub-area 1, 80/70 for sub-area 2, 60/70 for sub-area 3, and 50/70 for sub-area 4; the person judgment threshold determined in step S5 is 8000. Multiplying the person judgment threshold by each calibration coefficient gives the (rounded) thresholds: 8000 for sub-area 1, 9143 for sub-area 2, 6857 for sub-area 3, and 5714 for sub-area 4.
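The rounded sub-area thresholds quoted above follow directly from the base threshold and the calibration coefficients of step S2:

```python
# Step S6 arithmetic from embodiment 1: the threshold of each sub-area
# is the person judgment threshold (8000) times its calibration
# coefficient, rounded to the nearest integer.
base = 8000
coeffs = {"sub1": 70 / 70, "sub2": 80 / 70, "sub3": 60 / 70, "sub4": 50 / 70}
thresholds = {area: round(base * c) for area, c in coeffs.items()}
```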
S7, judging whether the pixel value of each connected region in each subarea is larger than the threshold of that subarea, and counting a person if it is.
As can be seen from fig. 11, there is one connected region in subarea 1 with pixel value 8012; by the threshold 8000 of subarea 1, 1 person is present. Subarea 2 contains two connected regions with pixel values 13563 and 200; by the threshold 9143 of subarea 2, 1 person is present. No connected region exists in subarea 3. Subarea 4 contains two connected regions with pixel values 7282 and 180; by the threshold 5714 of subarea 4, 1 person is present. (The result for subarea 4 also shows that if the reference-object ratio were not calibrated and a single person judgment threshold were used without dividing into subareas, the region with pixel value 7282 would fall below the threshold of 8000 and a person would be misjudged as no person.) Therefore, 3 persons exist in the image to be detected, and the counting of indoor persons is achieved.
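The per-subarea counting rule of step S7, applied to the example values above (the list layout is an illustrative choice, not specified by the patent):

```python
def count_people(region_counts_per_subarea, thresholds):
    """Count one person for every connected region whose pixel count
    exceeds the threshold of its subarea."""
    total = 0
    for counts, threshold in zip(region_counts_per_subarea, thresholds):
        total += sum(1 for c in counts if c > threshold)
    return total
```

With the embodiment's values, the regions `[[8012], [13563, 200], [], [7282, 180]]` against thresholds `[8000, 9143, 6857, 5714]` yield a count of 3 persons.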
The above examples only illustrate embodiments of the present application, and although their description is specific and detailed, they should not be construed as limiting the scope of the application; all technical solutions obtained by equivalent substitution or equivalent transformation shall fall within the scope of the application.
Claims (9)
1. The intelligent energy-saving indoor people counting method based on machine vision is characterized by comprising the following steps of:
S1, acquiring an image to be detected and a background image by using a camera, wherein the background image does not include a person;
S2, dividing the background image into a plurality of subareas, each subarea containing a reference object of the same size; acquiring the pixel value of the length of the reference object in each subarea as a calibration measured value; selecting the pixel value of the length of the reference object in the subarea closest to the camera as a calibration standard value; and dividing the calibration measured value by the calibration standard value to obtain the calibration coefficient of each subarea;
S3, performing shadow elimination processing on the image to be detected;
S4, sequentially performing preprocessing, inversion and mathematical morphology processing on the image to be detected;
S5, performing connected region analysis on the image to be detected after the mathematical morphology processing, and determining a person judgment threshold;
S6, obtaining the threshold of each subarea according to the person judgment threshold and the calibration coefficient of each subarea obtained in step S2;
S7, counting the number of people in each subarea according to the threshold of that subarea.
2. The machine vision-based intelligent energy-saving indoor people counting method according to claim 1, wherein acquiring the image to be detected and the background image by using the camera comprises: acquiring a video of a monitoring area by using the camera; and selecting, from the image sequence of the video, a frame image that includes a person as the image to be detected and a frame image that does not include a person as the background image.
3. The machine vision-based intelligent energy-saving indoor people counting method according to claim 1, wherein the reference object is furniture, and the background image is divided into four sub-areas.
4. The machine vision-based intelligent energy-saving indoor people counting method according to claim 1, wherein in step S3 the shadow elimination processing is performed by means of a color space conversion.
5. The machine vision-based intelligent energy-saving indoor people counting method according to claim 4, wherein the shadow elimination processing comprises: converting the image to be detected and the background image from the RGB color space into the HSV color space, wherein R is red, G is green, B is blue, H is hue, S is saturation, and V is value (brightness); and performing shadow elimination by using formula (1);
I(x, y) = 0, if |H1 − H2| ≤ Ht and |S1 − S2| ≤ St and |V1 − V2| ≤ Vt; I(x, y) = 1, otherwise  (1)
in formula (1), I(x, y) represents the value at (x, y) after the image is binarized; H1, S1, V1 represent the values of the components of the image to be detected in the HSV color space; H2, S2, V2 represent the values of the components of the background image in the HSV color space; and Ht, St, Vt represent the thresholds of the hue value, saturation value and value (brightness) component, respectively, used in the shadow elimination.
6. The machine vision-based intelligent energy-saving indoor people counting method according to claim 1, wherein preprocessing the image to be detected comprises: separating the image to be detected into an R component image, a G component image and a B component image, and then performing a binarization operation on each component image to determine the component binarized map used for the subsequent people counting.
7. The machine vision-based intelligent energy-saving indoor people counting method according to claim 6, wherein the inversion and mathematical morphology processing of the image to be detected comprises: inverting the determined component binarized map and the binarized map of the background image respectively, subtracting the two images, and performing erosion followed by dilation to obtain the connected regions of the image to be detected.
8. The machine vision-based intelligent energy-saving indoor people counting method according to claim 1, wherein the connected region analysis of the morphologically processed image to be detected in step S5 comprises: counting the pixel values of each connected region.
9. The machine vision-based intelligent energy-saving indoor people counting method according to claim 1, wherein in step S6, the person judgment threshold is multiplied by the calibration coefficient of each subarea obtained in step S2 to obtain the threshold of each subarea; and in step S7, it is determined whether the pixel value of each connected region in each subarea is greater than the threshold of that subarea, and if so, the region is counted as a person.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010023087.XA CN111192263B (en) | 2020-01-09 | 2020-01-09 | Intelligent energy-saving indoor people counting method based on machine vision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111192263A CN111192263A (en) | 2020-05-22 |
CN111192263B true CN111192263B (en) | 2023-08-22 |
Family
ID=70710771
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101188743A (en) * | 2007-09-17 | 2008-05-28 | 深圳先进技术研究院 | A video-based intelligent counting system and its processing method |
CN101236606A (en) * | 2008-03-07 | 2008-08-06 | 北京中星微电子有限公司 | Shadow cancelling method and system in vision frequency monitoring |
CN102622763A (en) * | 2012-02-21 | 2012-08-01 | 芮挺 | Method for detecting and eliminating shadow |
CN109186584A (en) * | 2018-07-18 | 2019-01-11 | 浙江臻万科技有限公司 | A kind of indoor orientation method and positioning system based on recognition of face |
CN110428439A (en) * | 2019-07-18 | 2019-11-08 | 浙江树人学院(浙江树人大学) | A kind of shadow detection method based on shadow region color saturation property |
CN110503017A (en) * | 2019-08-12 | 2019-11-26 | 北京交通大学 | Intelligent and energy-saving indoor people detection system and method based on image processing |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2006105655A1 (en) * | 2005-04-06 | 2006-10-12 | March Networks Corporation | Method and system for counting moving objects in a digital video stream |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||