Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Currently, a user needs to go to a brick-and-mortar store for optometry and then purchase vision correction products (such as glasses) based on the optometry information. Although online shopping malls offer a large number of vision correction products, a user cannot try on vision correction products in an online shopping mall, and the online shopping mall cannot acquire the user's optometry information; therefore, the user cannot complete selecting, trying on, and purchasing vision correction products (including myopia glasses, hyperopia glasses, sunglasses, and the like) online.
In view of the fact that electronic devices are increasingly widespread, in the present disclosure, intelligent optometry can be realized based on the hardware of an electronic device (such as a distance sensor, a camera, and the like), and functions such as real-time fitting and selection can be developed in software, so that vision correction products can be recommended and purchased online through the electronic device.
A recommendation method, apparatus, electronic device, and medium for electronic commerce according to an embodiment of the present disclosure are described below with reference to the accompanying drawings.
Fig. 1 is a flowchart illustrating a recommendation method for electronic commerce according to an embodiment of the disclosure.
The recommendation method for electronic commerce is described by taking the example in which the method is configured in a recommendation apparatus for electronic commerce, and the apparatus can be applied to any electronic device, so that the electronic device can execute a recommendation function for electronic commerce.
The electronic device may be any device with computing capability, for example, a Personal Computer (PC), a mobile terminal, a server, and the like, and the mobile terminal may be a hardware device with an operating system, a touch screen, and/or a display screen, such as an in-vehicle device, a mobile phone, a tablet computer, a personal digital assistant, or a wearable device.
As shown in fig. 1, the recommendation method for electronic commerce may include the steps of:
Step 101, obtaining a distance between a target object and an electronic device, and a test type.
In embodiments of the present disclosure, the test types may include, but are not limited to, a myopia test, a hyperopia test, and the like. The test type may be determined by a user. For example, taking the case where the test types include a near vision test and a far vision test, the display interface of the electronic device may include two controls, one of which is used for testing near vision power and the other for testing far vision power, and the user may select a desired test type by triggering the corresponding control.
As an example, two controls, one for a far vision test and one for a near vision test, are included on the display interface; if the user knows that he or she is nearsighted but does not know the exact near vision power, the user can test his or her near vision power by touching the control for the near vision test.
In the disclosed embodiment, the target object may be a person, an animal, or the like that requires optometry.
In the embodiment of the present disclosure, the distance between the target object and the electronic device may be acquired.
As an example, the distance between the target object and the electronic device may be measured by a distance sensor in the electronic device.
As another example, the distance between the target object and the electronic device may be measured by a depth sensor (a TOF (Time of Flight) sensor, a structured light sensor, or the like) of the electronic device.
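As an illustrative sketch only (not part of the disclosure), a time-of-flight measurement reduces to the physics of a round trip: the emitted light pulse travels to the object and back, so the distance is half the product of the speed of light and the round-trip time. The timing value used below is a made-up example.

```python
# Illustrative sketch of a TOF-style distance estimate; not the disclosed
# implementation. The round-trip time below is an assumed example value.
SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def tof_distance_m(round_trip_time_s: float) -> float:
    """Distance = c * t / 2, since the pulse travels out and back."""
    return SPEED_OF_LIGHT_M_PER_S * round_trip_time_s / 2.0

# A 3.34-nanosecond round trip corresponds to roughly half a metre.
print(tof_distance_m(3.34e-9))
```

In practice the sensor hardware reports the distance directly; the formula is shown only to make the measurement principle concrete.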
Step 102, inquiring and displaying a vision test image matched with the test type and the distance; wherein the vision test image is used to test the vision of the target object.
In the embodiment of the present disclosure, the vision test image may include a test pattern, for example, the test pattern may be "E" indicating different directions, or the test pattern may include patterns such as flowers, grass, scissors, umbrellas, and the like.
It should be noted that, in order to improve the accuracy and reliability of the vision test result, when the distances between different objects and the electronic device are different, the vision test images displayed by the electronic device may be different. For example, when the distance between the subject and the electronic device is relatively close, the test pattern in the vision test image may be relatively small, and when the distance between the subject and the electronic device is relatively far, the test pattern in the vision test image may be relatively large.
In the embodiment of the disclosure, correspondences between different distances and test types and vision test images may be preset, so that in the disclosure, the correspondences may be queried based on the distance between the target object and the electronic device and the test type, to determine the vision test image matched with the distance and the test type, and the vision test image may be displayed through a display interface of the electronic device.
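The preset correspondence described above can be pictured as a simple lookup table. The sketch below is a hypothetical illustration: the image file names, the distance buckets, and the 1-metre threshold are all assumptions, not values from the disclosure.

```python
# Hypothetical sketch of the preset correspondence: a (test type,
# distance bucket) pair maps to a vision test image. All identifiers
# and the 1 m threshold are illustrative assumptions.
VISION_TEST_IMAGES = {
    ("myopia", "near"): "chart_small_E.png",
    ("myopia", "far"): "chart_large_E.png",
    ("hyperopia", "near"): "chart_small_icons.png",
    ("hyperopia", "far"): "chart_large_icons.png",
}

def query_vision_test_image(test_type, distance_m):
    # Farther subjects get larger test patterns, as described in the text.
    bucket = "near" if distance_m < 1.0 else "far"
    return VISION_TEST_IMAGES[(test_type, bucket)]

print(query_vision_test_image("myopia", 0.4))  # -> chart_small_E.png
```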
And step 103, acquiring feedback information of the target object on the vision test image, and determining vision test parameters of the target object according to the feedback information.
The vision testing parameters may include, but are not limited to, refractive power, interpupillary distance, and the like.
In the embodiment of the disclosure, in the process of displaying the vision test image on the display interface of the electronic device, feedback information of the target object on the vision test image can be acquired, and the vision test parameter of the target object is determined according to the feedback information.
And step 104, inquiring and displaying the recommended vision correction product matched with the vision test parameters and the test types according to the vision test parameters and the test types.
In the embodiment of the disclosure, the recommended vision correction product matched with the vision test parameter and the test type can be inquired according to the vision test parameter and the test type, and the recommended vision correction product is displayed through the display interface of the electronic device. The number of the recommended vision correction products may be one or more, and the disclosure does not limit this.
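The matching of vision test parameters and test type against a product pool can be sketched as a filter over a catalogue. The catalogue entries and power ranges below are invented for illustration and are not from the disclosure.

```python
# Illustrative sketch of querying recommended products; the catalogue and
# its power ranges are assumed example data.
CATALOG = [
    {"name": "Lens A", "type": "myopia", "min_power": -3.0, "max_power": -1.0},
    {"name": "Lens B", "type": "myopia", "min_power": -6.0, "max_power": -3.5},
    {"name": "Lens C", "type": "hyperopia", "min_power": 1.0, "max_power": 3.0},
]

def recommend(test_type, power):
    """Return names of products whose type matches the test type and whose
    power range covers the measured refractive power."""
    return [p["name"] for p in CATALOG
            if p["type"] == test_type
            and p["min_power"] <= power <= p["max_power"]]

print(recommend("myopia", -2.5))  # -> ['Lens A']
```

Zero, one, or several products may match, consistent with the statement above that the number of recommended products is not limited.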
As an example, when the test type is a myopia test, the refractive power in the vision test parameters may be a myopia power; a recommended vision correction product matching the myopia power of the target object may be queried, and the recommended vision correction product may be displayed through the display interface of the electronic device, so that the user may select and purchase a recommended vision correction product matching his or her own myopia power.
As another example, when the test type is a far vision test, the refractive power in the vision test parameters may be a far vision power; a recommended vision correction product matching the far vision power of the target object may be queried and displayed through the display interface of the electronic device, so that the user may select and purchase a recommended vision correction product matching his or her own far vision power.
It should be noted that, with the rapid development of Internet technology and live streaming technology, consumption scenarios have gradually moved online, and live-streaming sales has become a new online marketing mode. Through live-streaming sales, the market penetration of goods gradually increases, and more and more anchors and celebrities join the live-streaming sales industry. Therefore, as an application scenario, the recommendation method for electronic commerce of the present disclosure may be applied to a live-streaming sales room: when a vision correction product merchant sells vision correction products in a live broadcast room, a user may perform optometry through an electronic device and purchase a vision correction product from the live broadcast room based on the optometry result. On one hand, because online traffic is large, the problem of attracting customers may be solved; on another hand, the vision correction product merchant may save considerable store-front expenses; on yet another hand, with lower costs and larger customer traffic, the merchant may reduce the unit price of the vision correction product, so that more users may be attracted to purchase it.
According to the recommendation method for electronic commerce of the embodiment of the disclosure, a test type and a distance between a target object and an electronic device are obtained; a vision test image matched with the test type and the distance is inquired and displayed, wherein the vision test image is used for testing the vision of the target object; feedback information of the target object on the vision test image is acquired, and vision test parameters of the target object are determined according to the feedback information; and a recommended vision correction product matched with the vision test parameters and the test type is inquired and displayed according to the vision test parameters and the test type. Therefore, optometry can be performed for the user through widely available electronic devices, and a vision correction product matched with the optometry result can be recommended to the user based on the optometry result, so that the user can purchase the vision correction product online without undergoing optometry at an offline brick-and-mortar store, which can reduce the difficulty for the user in purchasing vision correction products.
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, disclosure, and other processing of the personal information of the related user are all performed on the premise of obtaining the consent of the user, all comply with the provisions of the relevant laws and regulations, and do not violate public order and good customs.
In order to clearly illustrate how the above embodiments determine the vision test parameters of the target object according to the feedback information of the target object to the vision test image, the present disclosure also proposes a recommendation method for electronic commerce.
Fig. 2 is a flowchart illustrating a recommendation method for electronic commerce according to a second embodiment of the disclosure.
As shown in fig. 2, the recommendation method for electronic commerce may include the steps of:
step 201, obtaining the distance between the target object and the electronic device and the test type.
Step 202, inquiring and displaying a vision test image matched with the test type and the distance; wherein the vision test image is used for testing the vision of the target object.
For the explanation of steps 201 to 202, reference may be made to the related description in any embodiment of the present disclosure, which is not described herein again.
Step 203, obtaining feedback information of the target object to the vision test image, wherein the vision test image comprises a test pattern, and the feedback information comprises voice information.
In the embodiment of the present disclosure, in the process of displaying the vision test image on the display interface of the electronic device, the target object may be monitored by a sound pickup device (for example, a sound sensor) of the electronic device, so as to obtain voice information (also referred to as audio information or voice data).
As an example, when the vision test image is displayed on the display interface of the electronic device, the user may be prompted to speak the test pattern in the vision test image; for example, if the test pattern is an apple, the user may say "apple". Then, a sound pickup device (such as a sound sensor) in the electronic device performs voice collection on the target object, and voice information can be obtained.
And step 204, performing voice recognition on the voice information to obtain text information.
In the embodiment of the present disclosure, speech recognition may be performed on speech information based on a speech recognition technology to obtain text information.
Step 205, matching the text information with semantic information associated with the test pattern.
In the disclosed embodiments, each test pattern has associated semantic information. Still taking the above example, when the test pattern is an apple, the semantic information associated with the test pattern may include at least one language expression of the apple, such as "apple" and the like.
And step 206, determining vision test parameters of the target object according to the matching result of the text information and the semantic information.
The explanation of the vision testing parameters in the foregoing embodiment is also applicable to this embodiment, and is not repeated here.
In the embodiment of the present disclosure, the text information may be matched with semantic information associated with the test pattern, so as to determine the vision test parameter of the target object according to a matching result of the text information and the semantic information.
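The matching in steps 204-205 can be sketched as a set-membership check, assuming speech recognition has already produced the text and each test pattern carries a set of accepted expressions. The pattern identifier and the expression set below are illustrative assumptions.

```python
# Sketch of matching recognised text against a test pattern's semantic
# information. The pattern id and accepted expressions are assumptions.
SEMANTICS = {"apple_pattern": {"apple", "an apple", "red apple"}}

def matches(text, pattern_id):
    # Normalise the recognised text before comparing with the accepted set.
    return text.strip().lower() in SEMANTICS[pattern_id]

print(matches("Apple", "apple_pattern"))   # True
print(matches("banana", "apple_pattern"))  # False
```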
As an example, each vision test image may have a corresponding labeled vision parameter, i.e., each vision test image may be labeled with a corresponding vision parameter (such as diopter), and in the present disclosure, when the text information does not match the semantic information associated with the vision test image, the vision test parameter of the target object may be determined according to the corresponding labeled vision parameter of the vision test image.
In a possible implementation manner of the embodiment of the present disclosure, when the number of the vision test images is multiple, the vision test parameters may be determined as follows: a first vision test image is determined from the plurality of vision test images. For example, when the plurality of vision test images are displayed in descending order of their corresponding labeled vision parameters, the text information and the semantic information corresponding to the first vision test image are not matched, while the text information and the semantic information corresponding to each vision test image displayed before the first vision test image are matched; or, when the plurality of vision test images are displayed in ascending order of their corresponding labeled vision parameters, the text information and the semantic information corresponding to the first vision test image are matched, while the text information and the semantic information corresponding to each vision test image displayed before the first vision test image are not matched. Thus, in the present disclosure, the vision test parameters of the target object may be determined according to the labeled vision parameters corresponding to the first vision test image.
As an example, the labeled vision parameter corresponding to the first vision test image may be used as the vision test parameter of the target object.
As another example, the vision test parameters of the target object may be determined according to the labeled vision parameters corresponding to the first vision test image and the labeled vision parameters corresponding to a vision test image displayed before the first vision test image.
For example, if the first vision test image is the nth vision test image displayed, the vision test parameters of the target object may be determined according to the labeled vision parameters corresponding to the (n-1)th vision test image and the labeled vision parameters corresponding to the nth vision test image. For example, the mean of the two labeled vision parameters (e.g., diopters) may be used as the vision test parameter (e.g., diopter) of the target object.
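The boundary-and-mean logic above can be sketched as follows, under the assumption that images are shown in descending order of labeled power and that a list of match/mismatch results is available. The power values are made-up examples.

```python
# Sketch of the parameter determination described above: images are shown
# from larger to smaller labeled powers, `results` marks whether the
# subject's answer to each image matched, and the first mismatch is the
# boundary image. The example powers are illustrative assumptions.
def vision_parameter(labeled_powers, results):
    for n, matched in enumerate(results):
        if not matched:
            if n == 0:                 # failed the very first image
                return labeled_powers[0]
            # Mean of the boundary image and the image shown before it.
            return (labeled_powers[n - 1] + labeled_powers[n]) / 2.0
    return labeled_powers[-1]          # all images answered correctly

print(vision_parameter([-1.0, -2.0, -3.0], [True, True, False]))  # -> -2.5
```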
Therefore, the vision test is performed on the target object according to the plurality of vision test images, which can improve the accuracy and reliability of the test result, and thus the accuracy and reliability of the vision test parameter determination result.
And step 207, inquiring and displaying the recommended vision correction product matched with the vision test parameters and the test types according to the vision test parameters and the test types.
For the explanation of step 207, reference may be made to the related description in any embodiment of the present disclosure, which is not described herein again.
The recommendation method for electronic commerce according to the embodiment of the disclosure can determine the vision test parameters of the target object according to the voice information obtained by monitoring the target object by the sound sensor in the electronic device, and can improve the effectiveness and accuracy of the determination of the vision test parameters.
In order to clearly illustrate how the vision test parameters of the target object are determined according to the feedback information of the target object to the vision test image in any embodiment of the disclosure, the disclosure also provides a recommendation method for electronic commerce.
Fig. 3 is a flowchart illustrating a recommendation method for electronic commerce according to a third embodiment of the disclosure.
As shown in fig. 3, the recommendation method for electronic commerce may include the steps of:
Step 301, obtaining a distance between the target object and the electronic device, and a test type.
Step 302, inquiring and displaying vision test images matched with the test types and distances; wherein the vision test image is used for testing the vision of the target object.
For the explanation of steps 301 to 302, reference may be made to the related description in any embodiment of the present disclosure, which is not described herein again.
Step 303, obtaining feedback information of the target object to the vision test image, where the vision test image includes the test pattern, and the feedback information includes the first image information.
In the embodiment of the disclosure, in the process of displaying the vision test image on the display interface of the electronic device, the target object may be monitored by an image sensor (such as a front camera) in the electronic device to obtain the first image information.
As an example, when the display interface of the electronic device presents the vision test image, the user may be prompted to indicate the orientation of the test pattern in the vision test image, e.g., the test pattern may be "E" for indicating a different orientation. For example, the test pattern may indicate an orientation of "up" when the test pattern is as shown in fig. 4 (a), an orientation of "down" when the test pattern is as shown in fig. 4 (b), an orientation of "left" when the test pattern is as shown in fig. 4 (c), and an orientation of "right" when the test pattern is as shown in fig. 4 (d).
In the present disclosure, the target object may indicate the orientation of the test pattern in the vision test image with a finger, and the target object may be image-captured by an image sensor in the electronic device to obtain the first image information.
Step 304, performing finger orientation recognition on the first image information to determine a target orientation of the target object finger.
In the embodiment of the present disclosure, the first image information may be subjected to finger orientation recognition based on an image recognition technique to determine a target orientation of the target object finger.
As an example, finger orientation recognition may be performed on the first image information based on a deep learning technique to determine a target orientation of the target object finger. For example, the first image information may be finger-oriented using a trained finger-orientation recognition model to determine a target orientation of the target object finger.
The finger orientation recognition model may be trained as follows: a sample image is obtained, and a first finger orientation is labeled on the sample image; an initial finger orientation recognition model is used to perform finger orientation recognition on the sample image to obtain a second finger orientation; and the initial finger orientation recognition model is trained according to the difference between the first finger orientation and the second finger orientation, to obtain the trained finger orientation recognition model.
For example, a loss value may be generated based on a difference between the first finger orientation and the second finger orientation, wherein the loss value is in a positive relationship (i.e., a positive correlation) with the difference, i.e., the loss value is smaller when the difference is smaller, and the loss value is larger when the difference is larger. The model parameters in the initial finger orientation recognition model can thus be adjusted according to the loss value to minimize the loss value.
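One common loss with this positive-correlation property, assumed here purely for illustration (the disclosure does not name a specific loss), is cross-entropy over the four orientation classes: the further the predicted distribution is from the labeled orientation, the larger the loss.

```python
# Toy sketch of a loss positively correlated with the prediction error,
# using cross-entropy over four orientation classes. This is an assumed
# choice for illustration; the disclosure does not specify the loss.
import math

ORIENTATIONS = ["up", "down", "left", "right"]

def cross_entropy(logits, label):
    exps = [math.exp(z) for z in logits]
    probs = [e / sum(exps) for e in exps]
    return -math.log(probs[ORIENTATIONS.index(label)])

close = cross_entropy([4.0, 0.0, 0.0, 0.0], "up")  # prediction agrees
far = cross_entropy([0.0, 4.0, 0.0, 0.0], "up")    # prediction disagrees
print(close < far)  # True: smaller difference, smaller loss
```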
It should be noted that the above example is described only by taking minimization of the loss value as the termination condition of model training; in actual applications, other termination conditions may also be set, for example, the number of training iterations reaching a set number, the training duration reaching a set duration, convergence of the loss value, and the like, which is not limited by the present disclosure.
Step 305, matching the target orientation to a reference orientation associated with the test pattern.
In embodiments of the present disclosure, each test pattern has an associated reference orientation. Still taking the above example, the reference orientation may be "up" when the test pattern is as shown in fig. 4 (a), "down" when the test pattern is as shown in fig. 4 (b), "left" when the test pattern is as shown in fig. 4 (c), and "right" when the test pattern is as shown in fig. 4 (d).
And step 306, determining vision test parameters of the target object according to the matching result of the target orientation and the reference orientation.
In an embodiment of the present disclosure, the target orientation may be matched with the reference orientation associated with the test pattern, so as to determine the vision test parameters of the target object according to the result of matching the target orientation with the reference orientation.
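The orientation match itself reduces to an equality check against the stored reference. In this sketch the pattern identifiers are assumptions introduced only to key the lookup.

```python
# Sketch of matching the recognised finger orientation against each "E"
# pattern's reference orientation; pattern ids are assumed identifiers.
REFERENCE = {"E_up": "up", "E_down": "down", "E_left": "left", "E_right": "right"}

def orientation_matches(pattern_id, target_orientation):
    return REFERENCE[pattern_id] == target_orientation

print(orientation_matches("E_left", "left"))   # True
print(orientation_matches("E_left", "right"))  # False
```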
As an example, each vision test image may have a corresponding labeled vision parameter, i.e., each vision test image may have a corresponding vision parameter (such as a diopter number) labeled thereon.
In a possible implementation manner of the embodiment of the present disclosure, when the number of the vision test images is multiple, the vision test parameters may be determined as follows: a second vision test image is determined from the plurality of vision test images. For example, when the plurality of vision test images are displayed in descending order of their corresponding labeled vision parameters, the target orientation and the reference orientation corresponding to the second vision test image are not matched, while the target orientation and the reference orientation corresponding to each vision test image displayed before the second vision test image are matched; or, when the plurality of vision test images are displayed in ascending order of their corresponding labeled vision parameters, the target orientation and the reference orientation corresponding to the second vision test image are matched, while the target orientation and the reference orientation corresponding to each vision test image displayed before the second vision test image are not matched. Thus, in the present disclosure, the vision test parameters of the target object may be determined according to the labeled vision parameters corresponding to the second vision test image.
As an example, the labeled vision parameter corresponding to the second vision test image can be used as the vision test parameter of the target object.
As another example, the vision test parameters of the target object may be determined according to the labeled vision parameters corresponding to the second vision test image and the labeled vision parameters corresponding to a vision test image displayed before the second vision test image.
For example, if the second vision test image is the mth vision test image displayed, the vision test parameters of the target object may be determined according to the labeled vision parameters corresponding to the (m-1)th vision test image and the labeled vision parameters corresponding to the mth vision test image. For example, the mean of the two labeled vision parameters may be used as the vision test parameter of the target object.
Therefore, the vision test is performed on the target object according to the plurality of vision test images, which can improve the accuracy and reliability of the test result, and thus the accuracy and reliability of the vision test parameter determination result.
And step 307, inquiring and displaying recommended vision correction products matched with the vision test parameters and the test types according to the vision test parameters and the test types.
For the explanation of step 307, reference may be made to the related description in any embodiment of the present disclosure, and details are not repeated herein.
The recommendation method for electronic commerce according to the embodiment of the disclosure can determine the vision test parameters of the target object according to the image information obtained by monitoring the target object by the image sensor in the electronic device, and can improve the effectiveness and accuracy of the determination of the vision test parameters.
In order to clearly illustrate how the vision test parameters of the target object are determined according to the feedback information of the target object to the vision test image in the above embodiments, the present disclosure also provides a recommendation method for electronic commerce.
Fig. 5 is a flowchart illustrating a recommendation method for electronic commerce according to a fourth embodiment of the disclosure.
As shown in fig. 5, the recommendation method for electronic commerce may include the steps of:
step 501, obtaining the distance between the target object and the electronic device and the test type.
Step 502, inquiring and displaying a vision test image matched with the test type and the distance; wherein the vision test image is used for testing the vision of the target object.
For the explanation of steps 501 to 502, reference may be made to the related description in any embodiment of the present disclosure, which is not described herein again.
Step 503, obtaining feedback information of the target object to the vision test image, wherein the vision test image includes the test pattern, and the feedback information includes touch information.
In the embodiment of the disclosure, in the process of displaying the vision test image on the display interface of the electronic device, the touch operation of the control on the display interface of the electronic device triggered by the target object may be monitored to obtain the touch information.
As an example, when the display interface of the electronic device presents the vision test image, the user may be prompted to indicate the orientation of the test pattern in the vision test image, e.g., the test pattern may be "E" for indicating a different orientation.
For example, as illustrated in fig. 6, the display interface of the electronic device may include four controls, "up", "down", "left", and "right", for indicating the orientation of the test pattern. The target object can indicate the orientation of the test pattern by touching a control, and the touch operation triggered by the target object on the control of the display interface of the electronic device can be monitored to obtain touch information.
And step 504, determining a target control triggered by the target object according to the touch information.
In the embodiment of the present disclosure, the touch information may be parsed to determine a target control triggered by the target object.
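Parsing the touch information can be pictured as hit-testing the tap coordinates against each control's on-screen rectangle. The rectangles below are invented for illustration; the disclosure does not specify a layout.

```python
# Sketch of resolving touch coordinates to a target control, assuming the
# touch information carries tap coordinates and each control occupies a
# known rectangle. All coordinates are illustrative assumptions.
CONTROLS = {                      # (x0, y0, x1, y1) rectangles
    "up": (100, 0, 200, 50),
    "down": (100, 300, 200, 350),
    "left": (0, 150, 80, 200),
    "right": (220, 150, 300, 200),
}

def target_control(x, y):
    for name, (x0, y0, x1, y1) in CONTROLS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None                    # tap landed outside every control

print(target_control(40, 170))    # -> left
```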
And step 505, matching the target control with a reference control associated with the vision test image.
In the embodiments of the present disclosure, each test pattern has an associated reference control. Still taking the test pattern shown in fig. 6 as an example, the reference control may be the control corresponding to "left".
And step 506, determining vision test parameters of the target object according to the matching result of the target control and the reference control.
In the embodiment of the disclosure, the target control may be matched with a reference control associated with the vision test image, so as to determine the vision test parameter of the target object according to a matching result of the target control and the reference control.
As an example, each vision test image may have a corresponding labeled vision parameter, i.e., each vision test image may have a corresponding vision parameter (such as diopter number) labeled thereon.
In a possible implementation manner of the embodiment of the present disclosure, when the number of the vision test images is multiple, the vision test parameters may be determined as follows: a third vision test image is determined from the plurality of vision test images. For example, when the plurality of vision test images are displayed in descending order of their corresponding labeled vision parameters, the target control and the reference control corresponding to the third vision test image are not matched, while the target control and the reference control corresponding to each vision test image displayed before the third vision test image are matched; or, when the plurality of vision test images are displayed in ascending order of their corresponding labeled vision parameters, the target control and the reference control corresponding to the third vision test image are matched, while the target control and the reference control corresponding to each vision test image displayed before the third vision test image are not matched. Thus, in the present disclosure, the vision test parameters of the target object may be determined according to the labeled vision parameters corresponding to the third vision test image.
As an example, the labeled vision parameter corresponding to the third vision test image can be used as the vision test parameter of the target object.
As another example, the vision test parameters of the target object may be determined according to the labeled vision parameters corresponding to the third vision test image and the labeled vision parameters corresponding to a vision test image displayed before the third vision test image.
For example, if the third vision test image is the displayed s-th vision test image, the vision test parameters of the target object can be determined according to the labeled vision parameters corresponding to the s-1 th vision test image and the labeled vision parameters corresponding to the s-th vision test image. For example, the mean of the two annotated vision parameters may be used as the vision test parameter of the target subject.
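The selection logic above can be sketched as follows. This is a minimal illustrative sketch, assuming the images are displayed in descending order of their labeled vision parameters and that a per-image match result is already available; the function name and data layout are assumptions for illustration, not part of the disclosure.

```python
def determine_vision_parameter(labeled_params, match_results):
    """labeled_params: labeled vision parameter of each displayed image,
    in display order (assumed descending). match_results: True where the
    feedback (e.g. the target control) matched the reference for that image."""
    for s, matched in enumerate(match_results):
        if not matched:
            # The "third vision test image": the first mismatched image.
            if s == 0:
                return labeled_params[0]
            # Mean of the (s-1)-th and s-th labeled vision parameters.
            return (labeled_params[s - 1] + labeled_params[s]) / 2
    # Every displayed image matched: take the last labeled parameter.
    return labeled_params[-1]

print(determine_vision_parameter([1.0, 0.8, 0.6, 0.4], [True, True, False, False]))
```

The first mismatch after a run of matches marks the boundary between what the target object can and cannot resolve, so averaging the two adjacent labeled parameters is one plausible refinement.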
Therefore, the vision test is carried out on the target object according to the plurality of vision test images, so that the accuracy and reliability of the test result can be improved, and thereby the accuracy and reliability of the vision test parameter determination result can be improved.
Step 507, querying and displaying the recommended vision correction product matched with the vision test parameters and the test type according to the vision test parameters and the test type.
For the explanation of step 507, reference may be made to relevant descriptions in any embodiment of the present disclosure, which are not described herein again.
According to the recommendation method for electronic commerce of the embodiment of the disclosure, the vision test parameters of the target object can be determined according to the touch information obtained by monitoring the target object's touch operations on the controls of the display interface of the electronic device, which can improve the effectiveness and accuracy of determining the vision test parameters.
In order to clearly illustrate how to inquire about and display recommended vision correction products matched with the vision test parameters and the test types according to the vision test parameters and the test types in any embodiment of the disclosure, the disclosure also provides a recommendation method for electronic commerce.
Fig. 7 is a flowchart illustrating a recommendation method for electronic commerce according to a fifth embodiment of the disclosure.
As shown in fig. 7, the recommendation method for electronic commerce may include the steps of:
Step 701, obtaining the distance between a target object and an electronic device, and a test type.
Step 702, inquiring and displaying a vision test image matched with the test type and the distance; wherein the vision test image is used for testing the vision of the target object.
Step 703, obtaining feedback information of the target object on the vision test image, and determining vision test parameters of the target object according to the feedback information.
For the explanation of steps 701 to 703, reference may be made to the related description in any embodiment of the present disclosure, which is not described herein again.
At step 704, at least one recommended vision correction item that matches the vision test parameters and test type is determined from the plurality of candidate vision correction items.
In the embodiment of the present disclosure, the candidate vision correction product may be, for example, a vision correction product in a commodity pool, or a vision correction product promoted in a live broadcast room, and the like, which is not limited by the present disclosure.
In embodiments of the present disclosure, at least one recommended vision correction item matching the vision test parameters and the test type may be determined from a plurality of candidate vision correction items.
As an example, when the test type is a myopia test, the refractive power in the vision test parameters may be a myopia power, and recommended vision correction products that match the myopia power of the target subject may be queried.
As another example, when the test type is a distance vision test, the refractive power in the vision test parameters may be distance vision power, and a recommended vision correction article matching the distance vision power of the target subject may be queried.
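As a hedged sketch of such a query, the candidate pool could be filtered by test type and a supported power range. The field names and the matching rule here are illustrative assumptions, not part of the disclosure.

```python
def query_recommendations(candidates, test_type, power):
    """Return candidate items whose type matches the test type and whose
    supported power range covers the measured diopter."""
    return [
        item for item in candidates
        if item["test_type"] == test_type
        and item["min_power"] <= power <= item["max_power"]
    ]

# Hypothetical commodity pool.
pool = [
    {"name": "myopia glasses A", "test_type": "myopia",
     "min_power": -6.0, "max_power": -1.0},
    {"name": "reading glasses B", "test_type": "hyperopia",
     "min_power": 1.0, "max_power": 3.5},
]
print([i["name"] for i in query_recommendations(pool, "myopia", -2.5)])
```

In practice the query would likely run against a product database rather than an in-memory list, but the matching condition is the same.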
Step 705, displaying at least one recommended vision correction product.
In the disclosed embodiment, the recommended vision correction appliance may be presented through a display interface of the electronic device so that the target subject may select and purchase the recommended vision correction appliance matching the own vision test parameters and test types.
It should be noted that, when the recommended vision correction product is recommended glasses, the above example selects the glasses to be recommended only from complete (pre-assembled) glasses. In practical applications, the user may instead select a preferred frame or lens, and the recommended glasses may be generated based on the frame or lens selected by the user.
As a possible implementation manner, when the recommended vision correction product is recommended glasses, a plurality of candidate frames may be presented through a display interface of the electronic device, and in response to a selection operation on a first target frame of the plurality of candidate frames, at least one first target lens matching with the vision test parameter and the test type is determined from a plurality of candidate lenses associated with the first target frame, so that at least one recommended glasses may be determined according to the first target frame and the at least one first target lens, and each recommended glasses may be presented.
That is, the target subject may select a favorite frame (referred to as a first target frame in the present disclosure) from a plurality of candidate frames, and acquire lenses (referred to as candidate lenses in the present disclosure) associated with the first target frame, so that a lens (referred to as a first target lens in the present disclosure) matching the vision test parameters and the test type of the target subject may be selected from the respective candidate lenses, and the respective recommended eyeglasses may be determined based on the first target frame and the respective first target lenses.
As another possible implementation manner, when the recommended vision correction product is recommended glasses, a plurality of candidate lenses matching with the vision test parameters and types of the target subject may be displayed through a display interface of the electronic device, and in response to a selection operation on a second target lens in the plurality of candidate lenses, at least one second target frame associated with the second target lens is obtained, so that at least one recommended glasses may be determined according to the at least one second target frame and the second target lens, and each recommended glasses may be displayed.
That is, lenses (referred to as candidate lenses in the present disclosure) matching the vision test parameters and the test type of the target subject may be determined from the respective lenses, so that the target subject may select a lens (referred to as a second target lens in the present disclosure) preferred by the target subject from the plurality of candidate lenses, acquire a frame associated with the second target lens (referred to as a second target frame in the present disclosure), and may determine the respective recommended glasses according to the respective second target frames and the second target lenses.
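The two selection flows described above (frame-first and lens-first) can be sketched as follows, under the assumption of simple frame-lens association tables and an exact-power matching rule; all names and data are illustrative, not from the disclosure.

```python
# Hypothetical association tables between frames and lenses.
FRAME_TO_LENSES = {"frame1": ["lensA", "lensB"], "frame2": ["lensB"]}
LENS_TO_FRAMES = {"lensA": ["frame1"], "lensB": ["frame1", "frame2"]}
LENS_POWER = {"lensA": -2.0, "lensB": -2.5}

def frame_first(selected_frame, power):
    """Frame-first flow: keep only associated lenses matching the power,
    then pair each such first target lens with the first target frame."""
    lenses = [l for l in FRAME_TO_LENSES[selected_frame]
              if LENS_POWER[l] == power]
    return [(selected_frame, l) for l in lenses]

def lens_first(selected_lens):
    """Lens-first flow: pair the chosen second target lens with every
    associated second target frame."""
    return [(f, selected_lens) for f in LENS_TO_FRAMES[selected_lens]]

print(frame_first("frame1", -2.5))
print(lens_first("lensB"))
```

Each returned (frame, lens) pair corresponds to one recommended pair of glasses; a real implementation would also carry price and configuration data.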
Therefore, the vision correction product matched with the vision test parameters and the test types of the target object can be recommended to the target object in multiple modes, and the flexibility and the applicability of the method can be improved.
According to the recommendation method for electronic commerce, which is disclosed by the embodiment of the disclosure, the pertinence and the accuracy of recommendation of the vision correction product can be realized by recommending the vision correction product matched with the vision test parameter and the test type of the target object.
In order to clearly illustrate how to inquire and display recommended vision correction products matched with the vision test parameters and the test types according to the vision test parameters and the test types in any embodiment of the disclosure, the disclosure also provides a recommendation method for electronic commerce.
Fig. 8 is a flowchart illustrating a recommendation method for electronic commerce according to a sixth embodiment of the disclosure.
As shown in fig. 8, the recommendation method for electronic commerce may include the steps of:
step 801, obtaining the distance between the target object and the electronic device and the test type.
Step 802, inquiring and displaying vision test images matched with the test types and distances; wherein the vision test image is used for testing the vision of the target object.
And 803, acquiring feedback information of the target object on the vision test image, and determining vision test parameters of the target object according to the feedback information.
For the explanation of steps 801 to 803, reference may be made to the related description in any embodiment of the present disclosure, and details are not repeated herein.
And step 804, performing pupil distance detection on the third image information in the feedback information to obtain a reference pupil distance of the target object in the third image information.
In the embodiment of the present disclosure, the feedback information may further include third image information, where the third image information is obtained by monitoring the target object through an image sensor in the electronic device.
In the embodiment of the present disclosure, pupil detection or pupil distance detection may be performed on the third image information to obtain a reference pupil distance of the target object in the third image information, that is, the reference pupil distance is the pupil distance of the target object in the image.
And step 805, determining the actual pupil distance of the target object according to the reference pupil distance and the distance.
In the embodiment of the present disclosure, the actual pupil distance of the target object may be determined according to the reference pupil distance and the distance between the target object and the electronic device.
It should be noted that, when the distance or orientation between the target object and the image sensor differs, the pupil distance of the target object as presented in the image differs due to the perspective effect. Therefore, in the present disclosure, the actual interpupillary distance of the target object can be determined from the reference interpupillary distance and the distance based on perspective theory.
As an example, based on the perspective (pinhole imaging) relation: actual pupil distance = reference pupil distance × (distance between the target object and the electronic device) / (distance between the negative plate of the image sensor and the lens).
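Under the pinhole-camera assumption stated above, the conversion might be implemented as follows; the function name and the example figures are illustrative assumptions, not values from the disclosure.

```python
def actual_pupil_distance(reference_pd_mm, object_distance_mm, sensor_distance_mm):
    """Recover the real interpupillary distance from its image-plane size:
    similar triangles give real = image_size * object_distance / image_distance."""
    return reference_pd_mm * object_distance_mm / sensor_distance_mm

# e.g. a 0.62 mm pupil distance on the sensor, target object 500 mm away,
# lens-to-sensor distance of 5 mm (all figures hypothetical)
print(actual_pupil_distance(0.62, 500.0, 5.0))
```

In practice the lens-to-sensor distance is a fixed property of the camera module, so only the reference pupil distance and the measured object distance vary per test.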
Step 806, querying at least one recommended vision correction article matched with the test type, the vision test parameters and the actual pupil distance.
In embodiments of the present disclosure, at least one recommended vision correction item that matches the test type, vision test parameters, and actual pupillary distance may be queried.
As an example, when the test type is a myopia test, the refractive power in the vision test parameters may be a myopic power, and recommended vision correction goods matching the myopic power and the actual pupillary distance of the target subject may be queried.
As another example, when the test type is a distance vision test, the diopter number in the vision test parameters may be distance vision power, and recommended vision correction appliances matching the distance vision power and the actual pupil distance of the target subject may be queried.
Step 807, displaying at least one recommended vision correction item.
In the disclosed embodiment, the recommended vision correction product may be presented through a display interface of the electronic device so that the target subject selects and purchases the recommended vision correction product matching the own vision test parameters and the actual pupil distance.
In any embodiment of the disclosure, the face image of the target object may be rendered according to the recommended vision correction product, so as to present an effect of trying on the recommended vision correction product for the target object.
As an example, a face image of a target object may be captured by an image sensor (e.g., a front camera) in an electronic device, and a target image obtained by rendering the face image may be obtained, where the target image is obtained by rendering the face image according to a recommended vision correction product, so that the target image may be displayed.
Therefore, the effect of trying on the recommended vision correction product can be presented to the target object, so that vision correction products can be tried on online, meeting users' individual wearing requirements.
In any embodiment of the present disclosure, the recommended vision correction product may be displayed in a manner of: in response to the target object triggering operation on the setting control, a vision correction product list is displayed, wherein the vision correction product list may include target product information of at least one recommended vision correction product, and the target product information may include at least an access link (such as a purchase link) of the recommended vision correction product, and may further include information such as a brand name, a commodity name, and a price of the recommended vision correction product.
The setting control is a control for displaying vision correction product information. For example, when the recommendation method for electronic commerce is applied to a live-streaming sales scenario, the setting control may be a "shopping cart" control or a newly added control in the live broadcast room. The vision correction product information may include information such as a brand name, a product name, a price, and an access link (e.g., a purchase link).
The vision correction product list may be presented statically, may be displayed dynamically (for example, floating in), or may be superimposed on the face image or the target image of the target object, which is not limited in this disclosure.
As an example, the vision correction product is taken as glasses, and the vision correction product list may be a glasses list, for example, the glasses list may be as shown in an area 91 in fig. 9, and the glasses list includes target product information of a plurality of recommended glasses.
In this way, the target object can access the recommended vision correction products according to the target product information in the vision correction product list, which satisfies the user's need to learn about the recommended vision correction products, facilitates online purchase of the recommended products, and improves the user experience.
The recommendation method for electronic commerce of the embodiment of the disclosure can not only recommend vision correction products matched with the user's vision test parameters and test type, but also recommend vision correction products matched with the user's actual pupil distance, which improves the recommendation accuracy of vision correction products so as to meet the individual needs of different users.
In any embodiment of the present disclosure, taking as an example that the recommendation method for electronic commerce is applied to a live-streaming sales scenario (a live broadcast room with goods) and that the vision correction product is glasses, the glasses recommendation process mainly includes the following steps:
1. Glasses-type commodities are added to the shopping cart of the live broadcast room.
2. The user selects to try on glasses; the user's face image is captured through the front camera, and try-on rendering is performed according to the glasses selected by the user, so that the effect of the user trying on the glasses can be presented.
3. The user selects optometry and a test type; the distance between the user and the electronic device is measured through the front camera or a distance sensor, and a vision test image matching the distance is displayed on the display interface of the electronic device, so that optometry information such as the user's myopia power or hyperopia power and pupil distance can be determined according to the monitored voice information, image information, or touch information.
4. After the user selects a frame or lenses, the size, lens power, and the like of the glasses are automatically matched according to the user's optometry information, and the final configuration, price information, and the like of the recommended glasses are determined so that the user can place an order.
In summary, when the recommendation method for electronic commerce provided by the present disclosure is applied to a live-streaming sales scenario, the conversion rate and order volume of vision correction products can be increased; for a live broadcast platform, the GMV (Gross Merchandise Volume, i.e., total transaction value) of the promoted goods can be increased; for an anchor, a considerable passive income can be added; and meanwhile, follower users can conveniently purchase related vision correction products while browsing articles and videos.
In correspondence with the recommendation method for electronic commerce provided in the embodiments of fig. 1 to 8, the present disclosure also provides a recommendation apparatus for electronic commerce, and since the recommendation apparatus for electronic commerce provided in the embodiments of the present disclosure corresponds to the recommendation method for electronic commerce provided in the embodiments of fig. 1 to 8, the implementation manner of the recommendation method for electronic commerce provided in the embodiments of the present disclosure is also applicable to the recommendation apparatus for electronic commerce provided in the embodiments of the present disclosure, and will not be described in detail in the embodiments of the present disclosure.
Fig. 10 is a schematic structural diagram of a recommendation device for electronic commerce according to a seventh embodiment of the present disclosure.
As shown in fig. 10, the recommendation apparatus 1000 for electronic commerce may include: a first obtaining module 1001, a first processing module 1002, a second obtaining module 1003, a determining module 1004, and a second processing module 1005.
The first obtaining module 1001 is configured to obtain the distance between a target object and an electronic device, and a test type.
The first processing module 1002 is configured to query and display a vision test image matching the test type and distance; wherein the vision test image is used for testing the vision of the target object.
A second obtaining module 1003, configured to obtain feedback information of the target object on the vision test image;
and a determining module 1004, configured to determine, according to the feedback information, a vision test parameter of the target object.
And the second processing module 1005 is configured to query and display the recommended vision correction product matched with the vision test parameters and the test types according to the vision test parameters and the test types.
In a possible implementation manner of the embodiment of the present disclosure, the vision test image includes a test pattern, the feedback information includes voice information, and the voice information is obtained by monitoring a target object through a sound sensor in the electronic device; a determining module 1004 for: carrying out voice recognition on the voice information to obtain text information; matching the text information with semantic information associated with the test pattern; and determining vision test parameters of the target object according to the matching result of the text information and the semantic information.
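As an illustrative sketch of the matching step performed by the determining module, assuming the semantic information associated with a test pattern is stored as a normalized text label (an assumption, since the disclosure does not specify the representation):

```python
def match_text_to_pattern(text_info, semantic_info):
    """Return True when the recognized speech text names the displayed test
    pattern, e.g. the opening direction of an 'E' optotype ("up", "down")."""
    return text_info.strip().lower() == semantic_info.strip().lower()

print(match_text_to_pattern(" Up ", "up"))
```

A production system would use a speech-recognition service to produce `text_info` and likely a fuzzier comparison (synonyms, confidence scores) than exact string equality.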
In a possible implementation manner of the embodiment of the present disclosure, the number of the vision test images is multiple, and the determining module 1004 is configured to: determine a first vision test image from the plurality of vision test images; and determine vision test parameters of the target object according to the labeled vision parameters corresponding to the first vision test image. When the plurality of vision test images are displayed in the order of the corresponding labeled vision parameters from large to small, the text information and the semantic information corresponding to the first vision test image are not matched, and the text information and the semantic information corresponding to each vision test image displayed before the first vision test image are matched; when the plurality of vision test images are displayed in the order of the corresponding labeled vision parameters from small to large, the text information and the semantic information corresponding to the first vision test image are matched, and the text information and the semantic information corresponding to each vision test image displayed before the first vision test image are not matched.
In a possible implementation manner of the embodiment of the present disclosure, the vision test image includes a test pattern, the feedback information includes first image information, and the first image information is obtained by monitoring a target object through an image sensor in the electronic device; a determining module 1004 for: performing finger orientation recognition on the first image information to determine a target orientation of the target object finger; matching the target orientation to a reference orientation associated with the test pattern; and determining vision test parameters of the target object according to the matching result of the target orientation and the reference orientation.
In a possible implementation manner of the embodiment of the present disclosure, the number of the vision test images is multiple, and the determining module 1004 is configured to: determining a second vision test image from the plurality of vision test images; determining vision test parameters of the target object according to the labeled vision parameters corresponding to the second vision test image; when the plurality of vision test images are displayed according to the visual parameters marked correspondingly in the descending order, the target orientation corresponding to the second vision test image is not matched with the reference orientation, and the target orientation corresponding to each vision test image displayed before the second vision test image is matched with the reference orientation; when the plurality of vision test images are displayed in the order from small to large according to the corresponding marked vision parameters, the target orientation corresponding to the second vision test image is matched with the reference orientation, and the target orientation corresponding to each vision test image displayed before the second vision test image is not matched with the reference orientation.
In a possible implementation manner of the embodiment of the present disclosure, the vision test image includes a test pattern, the feedback information includes touch information, and the touch information is obtained by monitoring a touch operation of a control on a display interface of the electronic device; a determining module 1004 for: determining a target control triggered by the target object according to the touch information; matching the target control with a reference control associated with the vision test image; and determining vision test parameters of the target object according to the matching result of the target control and the reference control.
In a possible implementation manner of the embodiment of the present disclosure, the number of the vision test images is multiple, and the determining module 1004 is configured to: determining a third vision test image from the plurality of vision test images; determining vision test parameters of the target object according to the labeled vision parameters corresponding to the third vision test image; when the plurality of vision test images are displayed according to the sequence of the corresponding marked vision parameters from large to small, the target control corresponding to the third vision test image is not matched with the reference control, and the target control corresponding to each vision test image displayed before the third vision test image is matched with the reference control; when the plurality of vision test images are displayed in the order from small to large according to the corresponding marked vision parameters, the target control corresponding to the third vision test image is matched with the reference control, and the target control corresponding to each vision test image displayed before the third vision test image is not matched with the reference control.
In a possible implementation manner of the embodiment of the present disclosure, the second processing module 1005 is configured to: determining at least one recommended vision correction item from the plurality of candidate vision correction items that matches the vision testing parameters and the test type; at least one recommended vision correction article is displayed.
In one possible implementation of the disclosed embodiment, the recommended vision correction appliance includes recommended eyeglasses; a second processing module 1005, configured to: displaying a plurality of candidate frames; in response to a selection operation of a first target frame of the plurality of candidate frames, determining at least one first target lens matching the vision test parameters and the test type from among a plurality of candidate lenses associated with the first target frame; determining at least one recommended eyeglass according to the first target eyeglass frame and the at least one first target lens; presenting at least one recommended eyewear; or displaying a plurality of candidate lenses matched with the vision test parameters and the test types; in response to a selection operation of a second target lens of the plurality of candidate lenses, acquiring at least one second target frame associated with the second target lens; determining at least one recommended spectacle according to the at least one second target spectacle frame and the second target spectacle lens; at least one recommended eyewear is presented.
In a possible implementation manner of the embodiment of the present disclosure, the recommendation apparatus 1000 for electronic commerce may further include:
and the acquisition module is used for acquiring the face image of the target object.
And the third acquisition module is used for acquiring a target image obtained by rendering the face image, wherein the target image is obtained by rendering the face image according to the recommended vision correction product.
And the display module is used for displaying the target image.
In a possible implementation manner of the embodiment of the present disclosure, the second processing module 1005 is configured to: performing pupil distance detection on third image information in the feedback information to obtain a reference pupil distance of the target object in the third image information; determining the actual interpupillary distance of the target object according to the reference interpupillary distance and the distance; inquiring at least one recommended vision correction article matched with the test type, the vision test parameters and the actual pupil distance; displaying at least one recommended vision correction article.
In a possible implementation manner of the embodiment of the present disclosure, the second processing module 1005 is configured to: responding to the trigger operation of a setting control for displaying the information of the vision correction supplies, and displaying a vision correction supply list; the vision correction article list comprises target article information of at least one recommended vision correction article, and the target article information comprises access links of the recommended vision correction articles.
The recommendation device for electronic commerce of the embodiment of the disclosure obtains the test type and the distance between the target object and the electronic device; queries and displays a vision test image matched with the test type and the distance, wherein the vision test image is used for testing the vision of the target object; acquires feedback information of the target object on the vision test image, and determines vision test parameters of the target object according to the feedback information; and queries and displays the recommended vision correction product matched with the vision test parameters and the test type. In this way, optometry can be performed for the user through widely available electronic devices, and vision correction products matching the optometry result can be recommended to the user based on that result; the user can thus purchase vision correction products online without first visiting a brick-and-mortar store for optometry, which reduces the difficulty of purchasing vision correction products.
To implement the above embodiments, the present disclosure also provides an electronic device, which may include at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, the instructions being executable by the at least one processor to enable the at least one processor to perform the method for recommending e-commerce proposed by any of the above-mentioned embodiments of the present disclosure.
In order to achieve the above embodiments, the present disclosure also provides a non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the recommendation method for electronic commerce proposed in any one of the above embodiments of the present disclosure.
To achieve the above embodiments, the present disclosure also provides a computer program product including a computer program, which when executed by a processor, implements the recommendation method for electronic commerce proposed by any of the above embodiments of the present disclosure.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 11 shows a schematic block diagram of an example electronic device that may be used to implement embodiments of the present disclosure. The electronic device may include the server and the client in the foregoing embodiments. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in FIG. 11, the electronic device 1100 includes a computing unit 1101, which can perform various appropriate actions and processes in accordance with a computer program stored in a ROM (Read-Only Memory) 1102 or a computer program loaded from a storage unit 1108 into a RAM (Random Access Memory) 1103. In the RAM 1103, various programs and data necessary for the operation of the electronic device 1100 may also be stored. The computing unit 1101, the ROM 1102, and the RAM 1103 are connected to each other by a bus 1104. An I/O (Input/Output) interface 1105 is also connected to the bus 1104.
A plurality of components in the electronic device 1100 are connected to the I/O interface 1105, including: an input unit 1106 such as a keyboard, a mouse, and the like; an output unit 1107 such as various types of displays, speakers, and the like; a storage unit 1108 such as a magnetic disk, an optical disk, and the like; and a communication unit 1109 such as a network card, a modem, a wireless communication transceiver, and the like. The communication unit 1109 allows the electronic device 1100 to exchange information/data with other devices through a computer network such as the Internet and/or various telecommunication networks.
The computing unit 1101 may be any of a variety of general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 1101 include, but are not limited to, a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), various dedicated AI (Artificial Intelligence) computing chips, various computing units running machine learning model algorithms, a DSP (Digital Signal Processor), and any suitable processor, controller, microcontroller, and the like. The computing unit 1101 performs the respective methods and processes described above, such as the recommendation method for electronic commerce. For example, in some embodiments, the recommendation method for electronic commerce described above may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 1108. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 1100 via the ROM 1102 and/or the communication unit 1109. When the computer program is loaded into the RAM 1103 and executed by the computing unit 1101, one or more steps of the recommendation method for electronic commerce described above may be performed. Alternatively, in other embodiments, the computing unit 1101 may be configured by any other suitable means (e.g., by means of firmware) to perform the recommendation method for electronic commerce described above.
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuitry, FPGAs (Field-Programmable Gate Arrays), ASICs (Application-Specific Integrated Circuits), ASSPs (Application-Specific Standard Products), SOCs (Systems On Chip), CPLDs (Complex Programmable Logic Devices), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor, and which receives data and instructions from, and transmits data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a RAM, a ROM, an EPROM (Erasable Programmable Read-Only Memory) or flash memory, an optical fiber, a CD-ROM (Compact Disc Read-Only Memory), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (Cathode Ray Tube) or LCD (Liquid Crystal Display) monitor) for displaying information to the user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback), and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: a LAN (Local Area Network), a WAN (Wide Area Network), the Internet, and blockchain networks.
The computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or a cloud host, which is a host product in a cloud computing service system that overcomes the defects of difficult management and weak service scalability in conventional physical hosts and VPS (Virtual Private Server) services. The server may also be a server of a distributed system, or a server incorporating a blockchain.
It should be noted that artificial intelligence is the discipline of studying how to make a computer simulate certain human thinking processes and intelligent behaviors (such as learning, reasoning, thinking, and planning), and it includes both hardware-level and software-level technologies. Artificial intelligence hardware technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, and big data processing; artificial intelligence software technologies mainly include computer vision technology, speech recognition technology, natural language processing technology, machine learning/deep learning, big data processing technology, knowledge graph technology, and the like.
Deep learning is a new research direction in the field of machine learning. It learns the intrinsic rules and representation hierarchies of sample data, and the information obtained in the learning process is very helpful for interpreting data such as text, images, and sounds. Its ultimate goal is to enable a machine to have analysis and learning capabilities like a human, and to recognize data such as text, images, and sounds.
Cloud computing refers to a technology architecture that accesses a flexibly extensible shared pool of physical or virtual resources through a network, where the resources may include servers, operating systems, networks, software, applications, storage devices, and the like, and may be deployed and managed in an on-demand, self-service manner. Cloud computing technology can provide efficient and powerful data processing capabilities for technical applications and model training in artificial intelligence, blockchains, and the like.
According to the technical solutions of the embodiments of the present disclosure, the test type and the distance between the target object and the electronic device are obtained; a vision test image matched with the test type and the distance is queried and displayed, wherein the vision test image is used for testing the vision of the target object; feedback information of the target object on the vision test image is acquired, and vision test parameters of the target object are determined according to the feedback information; and a recommended vision correction product matched with the vision test parameters and the test type is queried and displayed. Thus, optometry can be performed for the user through widely popularized electronic devices, and based on the optometry result, vision correction products matched with that result can be recommended to the user. The user can then purchase vision correction products online without having to visit a brick-and-mortar store for optometry, which reduces the difficulty of purchasing vision correction products.
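The flow summarized above (obtain the test type and distance, select a matched vision test image, collect the user's feedback, derive vision test parameters, and query matching products) can be sketched in Python. This is a minimal, hypothetical illustration only: the function names, the constant-visual-angle scaling, the simple correct-answer score, and the product catalog are assumptions made for exposition, not the claimed implementation.

```python
# Hypothetical sketch of the recommendation flow described above.
# All names, thresholds, and catalog entries are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class VisionTestImage:
    test_type: str      # e.g. "myopia", "hyperopia"
    distance_cm: int    # viewing distance the chart is calibrated for
    symbol_size: float  # size of the optotype shown on screen


def query_test_image(test_type: str, distance_cm: int) -> VisionTestImage:
    """Select a test image matched to the test type and the measured
    distance between the target object (user) and the device."""
    # Scale the optotype with distance so the visual angle stays constant.
    base_size_at_50cm = 10.0
    return VisionTestImage(test_type, distance_cm,
                           symbol_size=base_size_at_50cm * distance_cm / 50)


def determine_vision_parameters(feedback: list) -> float:
    """Map the user's feedback (correct/incorrect answer per level)
    to a vision test parameter; here, the fraction answered correctly."""
    return sum(feedback) / len(feedback)


def recommend_products(score: float, test_type: str, catalog: list) -> list:
    """Query products whose type matches the test type and whose
    supported score range covers the measured parameter."""
    return [p for p in catalog
            if p["type"] == test_type
            and p["min_score"] <= score <= p["max_score"]]


# Usage: a user 40 cm from the screen takes a myopia test.
image = query_test_image("myopia", distance_cm=40)
score = determine_vision_parameters([True, True, False, True])  # 0.75
catalog = [
    {"name": "Lens A", "type": "myopia", "min_score": 0.5, "max_score": 0.8},
    {"name": "Lens B", "type": "myopia", "min_score": 0.8, "max_score": 1.0},
]
matches = recommend_products(score, "myopia", catalog)  # Lens A matches
```

In practice the score would be replaced by proper optometric parameters (e.g., diopters) and the distance would come from the device's distance sensor, but the three-step structure (matched stimulus, feedback, filtered recommendation) is the same.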
It should be understood that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, which is not limited herein as long as the desired results of the technical solutions proposed in the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations, and substitutions may be made depending on design requirements and other factors. Any modifications, equivalent replacements, and improvements made within the spirit and principles of the present disclosure shall be included within the scope of protection of the present disclosure.