
WO2025165650A1 - Method and system to calculate and display knowledge credibility values - Google Patents


Info

Publication number
WO2025165650A1
Authority
WO
WIPO (PCT)
Prior art keywords
domain
data
knowledge
individual
individuals
Prior art date
Legal status (assumption; not a legal conclusion)
Pending
Application number
PCT/US2025/012826
Other languages
French (fr)
Inventor
Lowell S. STADELMAN
Current Assignee (the listed assignees may be inaccurate)
Individual
Original Assignee
Individual
Priority date (assumption; not a legal conclusion)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Publication of WO2025165650A1 publication Critical patent/WO2025165650A1/en
Pending legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06N 5/022 Knowledge engineering; Knowledge acquisition
    • G06N 20/00 Machine learning
    • G06Q 10/0631 Resource planning, allocation, distributing or scheduling for enterprises or organisations
    • G06Q 10/0639 Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q 10/1053 Employment or hiring

Definitions

  • This invention provides a method and system to efficiently create a transparent, naturally biased calculation of an individual's interaction with knowledge from ranked data origins based on trust. It evaluates, calculates, scores, compares, and displays effort and depth within a knowledge domain. It provides an understanding of an individual's knowledge and its relevance to industry, and shows when similar individuals were hired by respected companies. [004]
  • One embodiment of this invention provides employers with a trusted indication of the depth of knowledge and capabilities of an individual as well as their standing in comparison to other organizations, and well recognized companies who have hired individuals with similar knowledge depth.
  • A second embodiment of this invention provides professionals a trusted means to support their claims of accomplishment, command, and depth of understanding in a domain-of-knowledge.
  • A third embodiment of this invention provides an apples-to-apples means of comparison based on a decomposition of why employers hire talent and what they look for. It obtains a broad understanding from multiple metrics that can be indicative outside of a university setting. For instance, this invention considers the practice of knowledge regardless of where it occurs. Most practice is conducted in a business setting where there are real consequences, risks, and opportunities, with hard indicators of success and failure or approval and disapproval.
  • A fourth embodiment of this invention provides a timeline of knowledge, projects, people, and positions of responsibility while being based on objective data and trust.
  • Knowledge credibility is a rating within a field or domain-of-knowledge based on the following: time expending effort within said domain; the depth of knowledge within said domain; the activity they have conducted with other individuals who are experts within said domain; the activity they have conducted with entities, such as companies or government offices, that are experts within said domain; the value from exchange of a scarce resource caused by their expertise, or for their expertise, within said domain; the acknowledgement from other individuals who are considered experts within said domain; the uniqueness and difficulty of their accomplishments within said domain; and finally their positions of responsibility within said domain.
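  • The rating factors listed above could be combined into a single score. The following is a minimal sketch assuming a simple weighted sum; the factor names and weights are hypothetical illustrations, not a formula prescribed by this invention:

```python
# Hypothetical weighted-sum sketch of a knowledge-credibility score.
# Factor names and weights are illustrative assumptions only.

FACTOR_WEIGHTS = {
    "effort_time": 0.20,         # time expending effort within the domain
    "depth": 0.20,               # depth of knowledge within the domain
    "expert_interaction": 0.15,  # activity with expert individuals/entities
    "value_exchanged": 0.15,     # value from exchange of a scarce resource
    "acknowledgement": 0.10,     # acknowledgement from recognized experts
    "accomplishments": 0.10,     # uniqueness/difficulty of accomplishments
    "responsibility": 0.10,      # positions of responsibility
}

def credibility_score(factors: dict) -> float:
    """Combine normalized factor values (each in [0, 1]) into one score."""
    return sum(FACTOR_WEIGHTS[name] * factors.get(name, 0.0)
               for name in FACTOR_WEIGHTS)

# Example: a user with partial metrics in three factors.
example = {"effort_time": 0.8, "depth": 0.6, "expert_interaction": 0.5}
score = credibility_score(example)
```

  Any real implementation would also weight each factor by the category of trust of the data source, as described in the hierarchy-of-trust below.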
  • Fig. 0 is a map of the symbology used for the drawings.
  • Fig. 1 is a diagram of an example computing system in which the claims of this invention may be implemented, but the invention may be used by electronic systems with more advanced features.
  • Fig. 2/1 is an illustration of the system components of the invention when a 3rd party application is running on the user's computing device.
  • Fig. 2/2 is an illustration of the system components' relationship when a 3rd party application is a virtual application.
  • Fig. 3/1 is an example of the visual display of this invention either in a browser or an application.
  • Fig. 3/2 is an example display of selectable options that are used for calculations of this invention.
  • Fig. 3/3 is an example display of the data selection used in calculations for this invention.
  • Fig. 3/4 is another example display of the data selection used in calculations for this invention.
  • Fig. 4 is a background/daemon application on the user’s machine that records and uploads data used by this invention for analysis.
  • Fig. 5 displays the server component of the invention.
  • Fig. 6 is a diagram of an interface such as an API that communicates metrics data from external software such as from third party companies.
  • Fig. 7 is a diagram of an application that calculates and displays Fig. 3/1.
  • Fig. 8 is a diagram of the domain identification or classification process that shows how domain IDs are determined and stored for incoming data.
  • Fig. 9 is a diagram of a timeline indicating an individual's/observed user's professional career, and key data that demonstrates their capabilities.
  • Program modules may include one or more of routines, programs, objects, variables, commands, scripts, functions, applications, components, data structures, and so forth, which may perform particular tasks or implement particular abstract data types.
  • the disclosed embodiments may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network.
  • program modules may be located in local and/or remote computer storage media including memory storage devices.
  • FIG. 1 provides a generalized example of a computing system that is capable of the innovations described by the claims of this invention.
  • the computing system is not intended to suggest a limitation of functionality.
  • the invention described may be implemented in a wide range of computing systems, including personal devices such as phones and laptops, and web servers.
  • Fig. 2/1 is an embodiment that displays an example of the system elements of this invention and their relationships, wherein a 3rd party application/software Fig. 2/1 (201) is interacting with this invention on the user's personal computing device.
  • This invention will operate with one to any number of personal computing devices.
  • the UI (User Interface) (300) of the personal computing device would likely be either a browser or an application (700), depicted further in Fig. 7 (700). If it is a local software application running on the same personal computing device, then it may receive its data from local storage (404) and may receive data and calculations from remote cloud-like services. If it is a browser, the preferred method is to receive the processed information from cloud-like services.
  • Cloud-like services include a server system, Fig. 2/1 (500) and Fig. 5 (500), which processes information from raw data or organized data from the storage system.
  • the following is the data needed from the 3rd party/external software Fig. 2/1 (200), such as a text processor, software development tool, or any software that is related to a field or domain of knowledge that is run in the local environment.
  • the third-party software would include its Vendor ID (406) and, if needed, a Project ID (407), as well as the source data (201).
  • a Category of Trust rating (301) shown as an association.
  • the vendor should not control the Category of Trust that is assigned to it. Further data, as shown in the Software Interface Fig. 6, would be provided as needed.
  • Data from the 3rd party/external software Fig. 2/1 (201) is sent either to the cloud using the Software Interface API (600), or to a daemon/background application (400), further depicted in Fig. 4 (400), which then stores the data in local storage on the personal computing device (404).
  • The daemon/background application may also synchronize data with the cloud when connected.
  • Fig. 2/1 further shows that the User Interface (300) may be displayed either in a browser, or as a part of an application (700).
  • Fig. 2/2 displays another embodiment of system elements where the 3rd party application Fig. 2/2 (221) is running in a virtual environment and is hosted in a 3rd party cloud (220). Users interact with the data virtually, usually through a browser.
  • the 3rd party virtual application/cloud (220) sends the necessary data to this invention's servers (500), which may operate in the same cloud.
  • the Category of Trust (301) is also shown as an association, and likewise would not be controlled by the 3rd party application in most conditions.
  • Data, including the Vendor ID (406), the Project ID (407), and the source data (201), would be included with all other needed data of the Software Interface (600), which is sent to the server (500).
  • When access is desired by a user, they may observe the metrics through the user interface (300), which would be provided by a browser.
  • references to 3rd party software or application could be any embodiment where the software is external.
  • the third party may be the same entity, or another entity, that is providing interaction with knowledge and whose data is sent to this invention.
  • The observed user is the individual whose knowledge credibility is displayed.
  • the observing user or requesting user is the user who is viewing the display of the metrics.
  • Other individuals are generally users who have interaction with the observed user within a domain of knowledge.
  • Fig. 3/1 depicts an example of the UI (User Interface) for the results of a request based on the user's ID Fig. 4 (405).
  • This exemplar shows a series of rows containing gauges and information that explain an observed user's or entity's knowledge metrics and Knowledge Credibility within a Domain.
  • Fig. 3/1 (301) is an example that allows the observer/user to select one or more categories of information that are used in the analysis and calculations conducted per category. Categories are an enumeration that represents the concreteness, or trust, that the data source is free of errors and related to the observed user. The number of categories may be beyond the four that are shown. The following four categories are an example of how data may be divided and used in a hierarchy-of-trust.
  • Category I of Fig. 3 (301) would be concrete information that is captured by software that can provide metrics directly due to its involvement in the practice or interaction within a knowledge domain.
  • Such practice may be the use of an instrumented baseball bat and integrated cameras where the user of the instrumented system is measured in performance, in numbers of hits vs. strikes, and in time, such as hours of practice. It may be from a learning system where the software observes the user's interaction as they learn.
  • These metrics can originate from 3rd party software such as software engineering tools, like an IDE, where the software is used to write software. It may also, for example, be tools that are used to search for errors, such as a peer review system. It may further be from a video chat system where a user is discussing a domain-of-knowledge, and the system analyzes the discussion to understand the user's interaction with another individual.
  • Category II of Fig. 3/1 (301) would be non-concrete information, such as an acknowledgement from others, but where an observer, participant, or record-keeper has a reputational stake in the validity of the source connected to the data.
  • An example would be a research paper where its authorship was not observed but has been published in a digital library such as IEEE or Psychology publications.
  • In this category there is an implementation of accountability for original authorship and accuracy of records, and agreement is shown, for example, by the number of citations and/or peer reviews.
  • acknowledgements provide strong authority for knowledge credibility but could contain inaccuracies as to who was the original creator.
  • Category III of Fig. 3/1 (301) is a non-concrete source of information, where a rating is related to a scarce resource such as sales or captured business. There are greater opportunities for data manipulation or errors, but reputation is gained and lost if the information is inaccurate or the source is incorrect.
  • An example of this may be where a manager has provided a review that contains the information used for knowledge credibility. It is far weaker than categories I & II but still holds stronger authority than if reported by the observed user themselves.
  • Category IV of Fig. 3/1 (301) is a weak source of information, where the rating comes from acknowledgement from others, such as votes from a website or social media platform. There is little at risk for the rater when they give a false vote, and votes are frequently paid for by the thousands. Even large volumes of votes, given their uncertain origin and accuracy, provide no stronger an authority than if reported by the observed user themselves. This would be a rating for a social media influencer.
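  • The four-level hierarchy-of-trust above can be represented as a simple enumeration. This is a sketch; the category names are illustrative, not prescribed by the invention, and lower numbers indicate stronger trust:

```python
from enum import IntEnum

class TrustCategory(IntEnum):
    """Hierarchy-of-trust for data sources; lower value = stronger trust.
    Names are illustrative assumptions, not prescribed by the invention."""
    CONCRETE = 1      # Category I: captured directly by instrumented software
    ACCOUNTABLE = 2   # Category II: source has reputational accountability
    STAKEHOLDER = 3   # Category III: scarce-resource ratings, e.g. manager reviews
    WEAK = 4          # Category IV: votes/acknowledgements with little at risk

def stronger(a: TrustCategory, b: TrustCategory) -> TrustCategory:
    """Return the more trusted of two categories (the lower value)."""
    return min(a, b)
```

  Because the categories form an ordered hierarchy, an `IntEnum` makes "is this source at least as trusted as Category II" a plain comparison.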
  • Fig. 3/1 (307) displays rows of gauges and data per domain. Rows, as shown, are configurable based upon the categories selected above. Shown is the configuration with categories I and II selected. The configuration of rows and columns is as necessary to display the domains and the values used in the knowledge credibility score. Each row displays the domain (302) relevant to that row's gauges/graphs and information. The domain is always shown; physics is shown as this row's domain. (303) is the value of a scarce resource for that domain that the observed user or entity has interacted with or practiced. Value exists in all four categories. In the example, value is related to a time period, e.g. a year, 3 weeks, or 2 months, but this is optional.
  • (304) represents effort as a measurement in time and is available in all 4 categories. Here we have shown a donut chart with the total time in hours and percentages given per type of interaction. Shown are Practice, Study Time, Discuss, and Create.
  • the third column, at (305), is for testing knowledge and displays points. Here the sum of all scores within the domain is displayed in a donut chart with percentages by type of testing and learning, e.g. free recall vs. a standard multiple-choice test.
  • the fourth column (306) is an example of an optional column based upon the selected categories. For category I alone it would not be displayed, since it includes methods of capture that are not concrete. For categories II through IV it would represent the number of citations for category II, or votes in categories III and IV.
  • One exemplar for use of the percentile is the display provided in Fig. 3/2 (308a). This percentile discussion concerns Fig. 5 (511) Percentile by Domain. Discussion and interaction data capture the knowledge within a domain between multiple individuals. To transparently show the relationship, it is displayed along with the understanding that individual A has interacted with the top n% of individuals in said domain, as shown in Fig. 3/4 at (332), the other individual's name, and (338), their rating. Discussion and interaction data alone, without calculations from other individuals, may be used. It is convenient and makes reliance on key individuals unnecessary. However, there are cases where information would be unlikely to be captured by this invention. An exemplar would be Robert Oppenheimer and the team that worked with him to end WWII. Individuals who had interacted with him, or were a part of that organization, would receive no knowledge credibility outside of their organization for the unique depth of knowledge they were capable of providing. This is discussed further in Percentile Calculation Strategies [0081].
  • At Fig. 3/1 (309), the area of data used in calculations may be selected by a second menu, Fig. 3/2 (309a).
  • The displayed information would be shown, for example, on another page that provides a plurality of information related to that domain and the area that was collected.
  • Fig. 3/3 shows the displayed data for Entities. The domains relevant to the row in Fig. 3/1 (309) would be provided as they are at (317).
  • a table would display information about the related entities, knowledge, people, and projects within the selected domain. Each row would display information about each element.
  • a preferred organization of a row would show the relationship with the observed user (person A) and other relevant data, so that the percentile and values are transparent.
  • the columns preferably would be organized to show the entity name (312), the relation with the observed user (313).
  • the time involved with the entity (314); the value from a scarce resource (315), in this case 'income', that was exchanged for the knowledge.
  • the sub-domains (316) that were involved with that entity.
  • the entity's rating (318) and the data provided for that rating, and a description of the entity in relation to the domain (319).
  • Multiple rows of entities are displayed and could be scrolled through as shown at (320).
  • The displayed information may be changed by selecting another area within a menu as shown at (311). Each area would display similar data to provide transparency and understanding of the percentile or score that an observed user is given per entity choice, shown in Fig. 3/1 (308) and Fig. 3/2 (308a).
  • FIG. 3/4 is the 'people' page that shows the other individuals within a domain, or in other words "people 1 - n", that person A has interacted with.
  • The layout is similar to Fig. 3/3.
  • Fig. 3/4 (331) is the other area’s selection menu.
  • (332) is the individual's name.
  • (333) the related domain.
  • (334) the time in the domain.
  • (335) the income over the duration.
  • (338) their rating within the selected domain.
  • The background, or daemon, application Fig. 4 is an embodiment shown on a user's computing device; alternative embodiments may be provided in a "virtual environment".
  • the daemon application captures, translates, and stores metrics from the data source (201), which may be a 3rd party application, for use by the invention's other components as shown in Fig. 2/1 and Fig. 2/2.
  • the background application does not need a display and is called by 3rd party software when needed, or may be running in an environment that contains the domain ID process (800) and allows the third-party software, containing the data source (201), to run inside of the environment (not shown).
  • the background application stores outputs from the data source (201) through the use of a bridge interface (402), or alternatively may use the API software interface (600), which may serve a variety of purposes, such as interfacing with an IDE software development tool, text editing program, accounting software, drafting software, and/or learning software where the necessary variables are recorded.
  • the data captured from the data source is then stored locally and/or remotely so that it may be combined and used in the calculations for the scoring systems: Fig. 5 (511) Percentile by Domain; (507, 508, 509, 510) Time, Points, Value, and Votes respectively; (514) Knowledge requested by industry comparison; (515) Users with equivalent metrics work entity; as well as the timeline (521).
  • The data is then passed to the storage system (404), which may be a typical database, a block-chain connected through an API, or simply storage of the variables in memory, such as in a string, object code, byte code, or even binary.
  • the user’s unique id is stored (405).
  • the third-party software vendor ID is stored (406).
  • the project ID if any is stored (407).
  • The total effort time recorded for this session is combined with the accrued effort time (408) and categorized as creating (408a); reading, observing, or listening (408b); discussing (408c); or practicing (408d). If any monetary value, or value from a scarce resource, has been exchanged, it is combined with the accrued value (409) along with the type of value (410), e.g. dollars, bitcoin, or euros.
  • the Data Source (201), which may be a 3rd party app, provides the domain IDs through the Domain ID Process (800), which may be provided through an API or through the 3rd party app's own processes. The domains are recorded for the session (411). Any people associations are stored (512). The category of data certainty/trust is stored (301).
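  • The per-session accumulation described above can be sketched as follows. The class and field names are assumptions that mirror the reference numerals of Fig. 4; they are not part of the disclosed interface:

```python
# Sketch of the daemon's per-session accumulation (Fig. 4). Field names
# mirror the reference numerals but are otherwise assumptions.
from collections import defaultdict

class SessionStore:
    def __init__(self, user_id, vendor_id, project_id=None):
        self.user_id = user_id            # (405) user's unique id
        self.vendor_id = vendor_id        # (406) third-party vendor id
        self.project_id = project_id      # (407) project id, if any
        self.effort = defaultdict(float)  # (408a-d) create/study/discuss/practice
        self.value = defaultdict(float)   # (409) accrued value, keyed by type (410)
        self.domains = set()              # (411) domains recorded for sessions

    def record_session(self, interaction_type, hours, domain,
                       value=0.0, value_type="USD"):
        # Combine this session's effort with the accrued effort time (408).
        self.effort[interaction_type] += hours
        # Combine any exchanged value with the accrued value (409).
        if value:
            self.value[value_type] += value
        self.domains.add(domain)

store = SessionStore(user_id="user-1", vendor_id="vendor-9")
store.record_session("practice", 2.5, "physics", value=100.0)
store.record_session("practice", 1.0, "physics")
```

  A real daemon would persist this record to local storage (404) and synchronize it with the server when connected.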
  • Fig. 5 illustrates the server component of the invention and the elements that it communicates with.
  • At Fig. 5 (500) is the server.
  • the preferred implementation is in a cloud-like environment where there would likely be multiple copies of the server running simultaneously. Implementations of this part of the invention would conduct the following: 1) receiving data from 3rd party/external software systems or the daemon/background application (400); 2) processing the information so that it may be stored and retrieved efficiently at high access rates; and 3) sending the processed data to a browser or application for display, or sending the processed data for other uses.
  • Received data first enters through security (501).
  • Received data includes: Data (519) from 3 rd party Bank and Pay Systems (518) to retrieve transactions and aggregate the necessary monetary values directly from the trusted source;
  • Received data for metrics (601) and bank systems (518), if necessary, is translated into usable data through a bridge Fig. 5 (502), checked for data integrity to ensure it can be trusted (503), then synchronized (504) and stored in a data storage system (516) along with the category of data certainty/trust Fig. 3/1 (301).
  • The data storage system Fig. 5 is preferably a database, but could also be a blockchain provided through an API to further provide data integrity, or it could simply store CSV, string, object code, machine code, or ultimately binary data in memory.
  • Requests (517) to the server endpoints must include the necessary information to retrieve it in a SQL query. A common request would be to see individual A's Knowledge Credibility. Such a call could come from a link, a QR code, a browser form, and/or a web-crawl request. Responses would preferably be protected, if so chosen by the observed user or for other purposes, and either return a message that the data is not available, or the user's metric data of Fig. 3/1, Fig. 3/3, Fig. 3/4, and Fig. 9, depending on the request, authorization, and permission settings.
  • a browser or application (700) would display the information after the server (500) is called by the correct endpoint and provided the necessary information to retrieve individuals/entities or groups and display them as shown in Fig. 3/1.
  • the call containing the entity id or user id (405) is received by the server (500) and may optionally contain the category of data certainty/trust (301) to be used. If the category is not included, the system will use a default category. It may also contain the domain (411) the user is interested in or other data as needed.
  • Data is then retrieved from the storage system (516), preferably using a SQL request. It is organized by its category/categories of trust (301) and by each relevant domain (411), and contains calculations for interaction time (507), discussed further in paragraph [0053]. It further contains: calculations for Points (508) from testing; and calculations for Value (509), which are from a common scarce resource such as the US Dollar and are an indicator that interactions are relevant to industry and in demand. Acknowledgements such as votes and/or citations, if available for the category selected, would be included in the calculation (510).
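  • The retrieval step can be sketched with an in-memory SQLite table. The schema, column names, and sample rows below are assumptions chosen to illustrate retrieval grouped by trust category (301) and domain (411):

```python
import sqlite3

# Illustrative schema: metrics keyed by user, trust category (301), domain (411).
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE metrics (
    user_id TEXT, category INTEGER, domain TEXT,
    hours REAL, points REAL, value REAL, votes INTEGER)""")
conn.executemany("INSERT INTO metrics VALUES (?,?,?,?,?,?,?)", [
    ("A", 1, "physics",  120.0, 300.0, 5000.0,  0),
    ("A", 2, "physics",    0.0,   0.0,    0.0, 12),
    ("A", 1, "calculus",  40.0,  90.0,    0.0,  0),
])

# Retrieve one user's metrics organized by category of trust and by domain,
# as described for the server's response to a request (517).
cur = conn.execute(
    """SELECT category, domain,
              SUM(hours), SUM(points), SUM(value), SUM(votes)
       FROM metrics WHERE user_id = ?
       GROUP BY category, domain
       ORDER BY category, domain""", ("A",))
result = cur.fetchall()
```

  Grouping by `(category, domain)` lets the display layer render one gauge row per domain for whichever trust categories the observer selected.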
  • Interaction time Fig. 5 (507) comprises the elements shown in Fig. 4: create (408a), study (reading, observing, listening) (408b), discuss (408c), and practice (408d).
  • Create time (408a) is the activity of creating knowledge for the purpose of consumption by others and where the effort may be captured by a trusted system.
  • Study time (408b) covers passive and active actions such as observing videos, reading, using flash cards, or taking tests or self-tests, where the activity may be captured by a trusted system.
  • Discuss time (408c) is the activity of discussing information, which may be in a lecture, a tutoring system, or in a business environment such as an engineering meeting, department meeting, or similar discussions where the discussion may be captured by a trusted system.
  • Practice time (408d) is the activity of being actively engaged in working with information within a domain-of-knowledge, where the activity may be captured by a trusted system.
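  • The composition of interaction time (507) from its four elements, and the percentage breakdown shown in the donut chart of Fig. 3/1 (304), can be sketched as:

```python
def interaction_time(create, study, discuss, practice):
    """Total interaction time (507) from its four elements (Fig. 4):
    create (408a), study (408b), discuss (408c), practice (408d).
    All values in hours."""
    return create + study + discuss + practice

def interaction_breakdown(create, study, discuss, practice):
    """Percentage breakdown by interaction type, as a donut chart
    like Fig. 3/1 (304) would display it."""
    total = interaction_time(create, study, discuss, practice)
    if total == 0:
        return {}
    return {
        "Create": 100 * create / total,
        "Study": 100 * study / total,
        "Discuss": 100 * discuss / total,
        "Practice": 100 * practice / total,
    }
```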
  • The percentile by domain calculation Fig. 5 (511) is a complex calculation that may be displayed as it is in Fig. 3/1 (308). The data would change as a user selected different entities Fig. 3/2 (308b). Upon a change, a possible implementation would make a request to the server for another calculation in relation to the new entity, and a percentile in relation to the observed user would be displayed at Fig. 3/2 (308a).
  • the Percentile by Domain Process Fig. 5 (512) also considers the category of data certainty/trust (301) in its calculation. It further depends on the calculation strategy (513), which is discussed in paragraph [0081].
  • the Percentile by Domain Process includes a selection from time, Fig. 5 (507), among the stored metrics.
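  • One possible implementation of the percentile-by-domain calculation ranks the observed user's domain score against all users' scores in that domain. The rank-based definition and the example scores below are assumptions; the invention's actual strategy is selectable (513):

```python
from bisect import bisect_left

def percentile_by_domain(observed_score, all_scores):
    """Percentile of the observed user's domain score among all users'
    scores in that domain (511). A simple rank-based sketch: the
    percentage of scores strictly below the observed score."""
    if not all_scores:
        return 0.0
    ranked = sorted(all_scores)
    below = bisect_left(ranked, observed_score)
    return 100.0 * below / len(ranked)

scores = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
p = percentile_by_domain(75, scores)  # 7 of 10 scores fall below 75
```

  A production implementation would run this per trust category and per domain, and could swap in a different strategy (513) such as interaction-graph-based ranking.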
  • The 'Knowledge requested by industry comparison' provides the percentage of the information that the observed user has interacted with and has an understanding of. The output is as shown in Fig. 3/1 (310) on the top row. This is discussed in depth in paragraphs [0059] and [0073].
  • the 'Users with equivalent metrics work entity' (515) finds similar individuals based on their metrics, discovers the companies they were hired by or are working for, and provides this as output as shown in Fig. 3/1 (310) on the bottom row. This is discussed in depth in paragraph [0072].
  • Fig. 5 (520), a browser, or (700), an application, represents the user's device where the preceding outputs would be displayed.
  • Fig. 6 is an exemplar of a software interface, an API, for 3rd party or external software, or communication through a bridge.
  • 3rd party software, after passing testing and meeting standards for assurance of accuracy and protection from tampering, would receive a vendor number and the proper instructions to communicate with this invention.
  • the vendor number along with an associated category of trust (301) would be stored in the database.
  • the following is the preferred information included: The user id (405).
  • the project description if any (412), a score if any (413), and acknowledgements from others, such as votes, citations, an academic grade or score, and/or a peer review, if any (414). Further included would be any entity associations (512a), people associations (512c), and any positions of responsibility if available (512f).
  • the captured data would then be sent to the server Fig. 5 (500) or the local background application Fig. 4 (400). Data sent to the server would include processing for integrity assurance (401).
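  • The fields listed for the software interface can be sketched as a payload assembled by the 3rd party side. The structure and key names below are assumptions that mirror the reference numerals; the actual wire format is not specified:

```python
import json

def build_metrics_payload(user_id, vendor_id, *, project_id=None,
                          project_description=None, score=None,
                          acknowledgements=None, entity_associations=None,
                          people_associations=None, positions=None):
    """Assemble the data the software interface (Fig. 6) would send to the
    server (500) or background application (400). Key names mirror the
    reference numerals but are otherwise assumptions."""
    payload = {
        "user_id": user_id,                                 # (405)
        "vendor_id": vendor_id,                             # (406)
        "project_id": project_id,                           # (407)
        "project_description": project_description,         # (412)
        "score": score,                                     # (413)
        "acknowledgements": acknowledgements or [],         # (414)
        "entity_associations": entity_associations or [],   # (512a)
        "people_associations": people_associations or [],   # (512c)
        "positions_of_responsibility": positions or [],     # (512f)
    }
    return json.dumps(payload)

msg = build_metrics_payload("user-1", "vendor-9", score=87.5)
```

  Note that the Category of Trust (301) is deliberately absent: it is associated with the vendor number on the server side, so the vendor cannot set its own trust level.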
  • Fig. 7 (700) is an alternate embodiment that represents an application for displaying the metrics of this invention and is shown on a user’s device.
  • the elements of this application may also exist as a part of another larger application with broader purposes.
  • the application communicates with local storage (which may be a database), and local storage communicates with the background application (400).
  • the application may, if connected to the internet, make a request to the server (500) with the user id (405) and trust category (301).
  • the server would then return the requested data, and calculations as requested either individually or at the same time.
  • the calculations would include (514) ‘Knowledge requested by industry comparison’, (515) ‘Users with equivalent metrics work entity’, and (511) ‘Percentile by domain’.
  • the calculations would include the previously mentioned calculations for the server: (512a) Entities calculations; (512b) Knowledge calculations; (512c) People calculations; and (512d) Project calculations, which are calculated per domain (411) and per category of data certainty/trust (301).
  • Locally stored information would include: the time calculation (507); points calculation (508); value calculation (509); and votes calculation (510), which are also conducted per domain (506) and per category (301).
  • The server may also provide (521) Timeline information. These data would be formatted by the Application (700) and displayed in the user interface (300) in the preferred display, similar to Figs. 3/1, 3/2, 3/3, 3/4, and 9. All communications to the server would go through security (701).
  • Fig. 8 depicts an exemplar of determining the knowledge domain from data that is provided from Fig. 8 (601), where data may be provided from multiple source types that are either classed or unclassed. "Classed" refers to data that is provided with a domain ID; "unclassed" data is provided without one.
  • One exemplar of a classed system would be an instrumented baseball system that would include a bat, baseball, and a baseball field that could report the performance of a baseball player. In this case the data would be preformatted with domain-specific information because of its direct relationship with the performance of a baseball player and thus their knowledge and experience.
  • Another exemplar of classed information may be from a learning/study application where the domain ID is provided by an institution or by multiple individuals who are unrelated but in agreement.
  • the Category of Trust is provided outside of the data based on the vendor ID.
  • An exemplar of unclassed information would be data from text that is an output from a video-chat conversation, or from text that is written from an observed user. If needed, a bridge (502) would serve to convert data from the source to the properly fonnatted data needed for this invention. API’s provided by this invention, where they are used by 3 rd parties, would also serve to correctly format data for use by this invention. The two cases for the data that comes from inputs is classed or unclassed.
  • If a domain ID is provided, it is considered “classed” (803) and is stored (411).
  • Otherwise, the domain ID is considered unclassed (802), and it may be derived through an internally provided AI content classifier (804), a “Human in the Loop” (HITL), or through an API provided by a commercial AI company. After the domain ID has been properly identified, it is then stored at (411). An indicator may also be included to show the level of certainty that the domain has been correctly identified. If AI is used, this indicator may depend on the model’s accuracy for its particular implementation.
  • Data that is provided may not be limited to only one domain. For example, if text data is provided and the subject is physics, several domains may be relevant, such as Particle Physics, Nuclear Physics, and Calculus.
  • Depth of knowledge is depth within a domain; it is not the DDK framework. Depth of knowledge would be discovered through the same identification process as the domain ID discussed in paragraph [0060]. For example, an individual’s understanding of physics could be shown through multiple levels of dependent parent domains. Depth of knowledge in physics may be shown by time value, and by depth in multiple sub-domains, such as classical mechanics and momentum, along with the algebra required to solve problems. In the exemplar display of metrics Fig. 3/1 (300), Software Engineering is given an amount of time, a value, and/or scoring. The subdomains, such as ‘Software Architecture’, are also displayed with a row, and ‘Data Structures’ would also have a row.
  • Each subdomain includes time spent in that domain and a link to the supporting data.
  • The metrics of parent domains do not need to be a sum of the subdomains. Rather, they are initially mapped as a key-value association. Over a period of time, while the data is collected, each increment in value is added to the related domains and subdomains.
  • Another example, not shown, is Marketing, where its subdomains such as Digital Marketing would have a row. Digital Marketing is a parent to Search Engine Optimization, and if the observed user had effort there, its metrics would be displayed similarly, depending on the Category of Trust the observer has selected.
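  • By way of illustration, the key-value association between subdomains and their parents described above may be sketched as follows. The domain names, minute values, and function names are illustrative assumptions, not part of the invention; the point is that each increment is credited to a domain and every ancestor independently, so parent metrics need not be re-summed from children.

```python
# Hypothetical sketch: each domain maps to its parent, and an increment of
# effort is credited to the domain and every ancestor domain.
PARENTS = {
    "Search Engine Optimization": "Digital Marketing",
    "Digital Marketing": "Marketing",
    "Marketing": None,  # top-level domain has no parent
}

def credit(metrics: dict, domain: str, minutes: int) -> None:
    """Add `minutes` to `domain` and to all of its parent domains."""
    while domain is not None:
        metrics[domain] = metrics.get(domain, 0) + minutes
        domain = PARENTS.get(domain)

metrics = {}
credit(metrics, "Search Engine Optimization", 30)
credit(metrics, "Digital Marketing", 15)
# metrics now holds SEO=30, Digital Marketing=45, Marketing=45
```

  • Because each increment propagates upward at write time, a parent row such as Marketing can report more time than the sum of any one child, consistent with the statement that parent metrics are not simply sums of subdomains.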
  • Knowledge needed by industry, Fig. 3/1 (310), is derived from individuals who are associated as working for an industry. A comparison is made between the observed user, who is the target subject of a query, and other individuals who are similar to the target. The returned data would include the industry and companies the similar individuals are working for and a percentage of the knowledge those individuals have. Alternatively, data may be collected from job postings, and a comparison may be made between the knowledge of the target subject and the knowledge requested in the job postings.
  • Achievement or accomplishment selection of key events may be made through the aggregation of data that points to the events and the individuals connected through them, where the aggregation is sufficient to provide evidence that the event occurred. The same methods as explained in Fig. 8 could be used to identify or validate these dates and events if they are provided by the observed user or another individual. Thus, it may not be category I data, since it is not observed but reported by category II or III data, and it would be displayed based upon the selected categories of trust. Achievement or accomplishment, in this exemplar, would be stored in the database and associated with the user’s unique ID, a date, and other relevant columns for description and data as needed. Achievement or accomplishment may be shown similarly to the data in the tables of Figs. 2/3 and 2/4 and in the Timeline, Fig. 9.
  • Positions of responsibility may be entered similarly: through a webform entry by a user with restricted access at the observed user’s employer, through web-scraping, through datamining, or through an automated system, for example where the employer is connected to a 3rd-party data mediator. Positions of responsibility, in this exemplar, would be stored in the database and associated with the user’s unique ID, a start date, and an end date. Positions of responsibility may be shown similarly to the data in the tables of Figs. 2/3 and 2/4 and in the timeline, Fig. 9.
  • Fig. 9 is an exemplar of the display of a timeline. It provides a chronology of an observed user’s or entity’s interaction with knowledge and connects the dots with a visual display.
  • The exemplar is not meant to limit a timeline; it may be represented with or without graphical lines, or may be text only.
  • A timeline may be any chronological representation.
  • The exemplar displays the company or entity (901) and dates in reverse order (902). It associates other key information such as positions of responsibility and significant contributions such as key projects and awards (903).
  • The timeline further displays others who were associated with the observed user and their significance (904). Hyperlinks to human-readable tables, such as those shown in Figs. 3/1, 3/3 and 3/4, may be displayed in the timeline, giving the observer the ability to quickly understand the underlying data (905).
  • The timeline calculation organizes the data in the desired chronological order.
  • The data is retrieved from the DB with a SQL query.
  • The query would request the desired data and organize it by entity, the observed user’s unique ID, and the start and end dates of their interaction with the entity.
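  • The timeline retrieval described above may be sketched as follows. The table and column names are illustrative assumptions only; any relational schema holding entity, user ID, and start/end dates would serve.

```python
import sqlite3

# Hypothetical schema for illustration; real table/column names may differ.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE positions (
    user_id TEXT, entity TEXT, start_date TEXT, end_date TEXT)""")
conn.executemany(
    "INSERT INTO positions VALUES (?, ?, ?, ?)",
    [("u1", "Acme Corp", "2019-03-01", "2021-06-30"),
     ("u1", "Beta LLC", "2021-07-01", "2024-01-15")])

# Request the observed user's entity interactions in reverse chronological
# order, as the timeline display (Fig. 9, element 902) requires.
rows = conn.execute(
    """SELECT entity, start_date, end_date
       FROM positions
       WHERE user_id = ?
       ORDER BY start_date DESC""", ("u1",)).fetchall()
# The most recent entity ("Beta LLC") is returned first.
```

  • The presentation layer would then render each returned row as one entry of the timeline, attaching the associated positions, projects, and hyperlinks.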
  • Data-Integrity, Fig. 5 (503), is common and may be accomplished through hashing algorithms applied to the data, sending the hash using encryption along with the data. This ensures that the data is sent from a trusted source and that it was not changed between its source and the server.
  • Data Synchronization, Fig. 5 (504), ensures that stored data is correct at all points. It provides the necessary actions to store the information in database tables so it may be efficiently stored and easily retrieved according to the needs of this invention.
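  • One common way to realize the hash-plus-encryption scheme above is a keyed hash (HMAC), sketched below. The shared key and payload are illustrative assumptions; in practice the key exchange would ride on the encrypted channel described in the text.

```python
import hmac
import hashlib

SHARED_KEY = b"example-shared-secret"  # assumption for illustration only

def sign(payload: bytes) -> str:
    """Sender computes a keyed SHA-256 digest of the payload."""
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, digest: str) -> bool:
    """Server recomputes the digest and compares in constant time."""
    return hmac.compare_digest(sign(payload), digest)

data = b'{"user_id": "u1", "domain": "Calculus", "minutes": 30}'
tag = sign(data)
ok = verify(data, tag)              # unmodified data passes
tampered = verify(data + b"x", tag)  # altered data fails
```

  • Because only a holder of the key can produce a valid digest, a matching digest shows both that the data came from a trusted source and that it was not changed in transit.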
  • A search by a specific domain or sub-domain may be conducted on this platform using an implementation where the observed users were scored based upon the metrics and methods of this invention. This could be done through a number of implementations, such as a webform or an options list that connects to the webserver.
  • The server would conduct the search using a SQL query to the database, through a software program specifically developed for this purpose, or through AI.
  • An enumerated list of observed users and entities may be created based on the calculations and methods of this invention and transmitted to other entities interested in reporting ratings of observed users, such as web-based search engines like Google or Bing.
  • Observed users would have the option to elect what data is available to be viewed by an observer user, or whether it may be viewed at all. Further, they may elect to remove all data related to them.
  • The ‘Knowledge requested by industry’ shown in Fig. 3/1 (310) and calculated in the server, Fig. 5 (514), is derived from job requests and an association with knowledge, time, and the metrics in a category of trust.
  • Some skillsets may require an individual, for example a social media marketer, to have demonstrated experience as a social media influencer.
  • The skills needed may be video editing, sound editing, understanding of cameras and lighting, an understanding of communications using appropriate language for the audience, and an understanding of video and image composition, to name a few.
  • The comparison of the skills and experience requested in industry versus the observed user’s skills and experience is shown.
  • The calculation can be accomplished using SQL, where job postings have been scanned from websites like LinkedIn or GlassDoor, or uploaded to this invention directly through queries made by hiring managers, for example. Normalization of required knowledge could be accomplished through data structures that provide one-to-many and many-to-one relationships, such as a hashtable with a linked list, which may be built through the use of Machine Learning where models are trained on large datasets of job postings. As a job posting is uploaded to storage, its skills are compared to the required knowledge for those skills. The nodes in the linked list contain a variable for experience as time in months. When the request for the observed user is made, a comparison can be made by requesting the skill needed by industry, with its time, against the observed user’s knowledge and time. A percentage or measurement is returned and displayed.
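  • The posting-versus-user comparison above may be sketched as follows. The skill names and month counts are illustrative assumptions; in the described system they would be normalized from scanned postings, with experience stored in months on the linked-list nodes.

```python
# Hypothetical normalized view: skill -> required experience in months,
# aggregated from job postings; and the observed user's captured experience.
industry_need = {
    "video editing": 24,
    "sound editing": 12,
    "image composition": 18,
}
observed_user = {"video editing": 30, "sound editing": 6}

def coverage(need: dict, have: dict) -> float:
    """Percentage of required skill-months the observed user can demonstrate.

    Experience beyond a skill's requirement is capped so that surplus in one
    skill cannot mask a gap in another.
    """
    required = sum(need.values())
    met = sum(min(have.get(skill, 0), months) for skill, months in need.items())
    return round(100 * met / required, 1)

pct = coverage(industry_need, observed_user)
# 24 + 6 + 0 met of 54 required months -> 55.6
```

  • The returned percentage would then be displayed alongside the industry requirement, as in Fig. 3/1 (310).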
  • The value of knowledge, Fig. 3 (303) and Fig. 5 (509), is an important metric because it helps to demonstrate the significance of the observed user and their knowledge. A time metric is included, since a value is difficult to understand without one.
  • The value may come from multiple sources, such as bank account deposits from an employer or a contractee that the observed user is working with.
  • The value may come through the API, Fig. 6 (600), provided to applications that, for example, allow the sale of study materials or course videos.
  • The values, along with a time stamp, would be stored in a database table along with the source and, if available, the domain-of-knowledge.
  • The hours of effort, Fig. 3 (304) and Fig. 5 (507), is an important metric because it represents the total number of captured hours of effort within a domain-of-knowledge. It can be a positive indicator of an observed user’s dedication and work ethic.
  • The metric of time is broken into sub-elements; in this example, they are study, create, discuss, and practice, plus a total time.
  • The preferred method for calculating time is to sum the total time with a SQL query to the same database as the previous queries.
  • The reported hours would be included as entries from devices connected through an API, or directly through the user’s on-board app that may be running in the background.
  • The query would sum the time in each sub-metric; the total sum may then be computed in the query or in the presentation software. The query would request the time by domain and by the unique ID of the observed user.
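  • The per-sub-metric summation above may be sketched as follows. The schema and activity labels are illustrative assumptions matching the study/create/discuss/practice breakdown described in the text.

```python
import sqlite3

# Hypothetical effort table; real column names may differ.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE effort (
    user_id TEXT, domain TEXT, activity TEXT, minutes INTEGER)""")
conn.executemany("INSERT INTO effort VALUES (?, ?, ?, ?)", [
    ("u1", "Calculus", "study", 120),
    ("u1", "Calculus", "practice", 60),
    ("u1", "Calculus", "discuss", 30),
])

# Sum each sub-metric per domain for the observed user; the grand total may
# be taken in the query or, as here, in the presentation layer.
rows = conn.execute(
    """SELECT activity, SUM(minutes)
       FROM effort
       WHERE user_id = ? AND domain = ?
       GROUP BY activity""", ("u1", "Calculus")).fetchall()
total_minutes = sum(m for _, m in rows)  # 210 minutes across sub-metrics
```

  • The same pattern, keyed on the observed user’s unique ID and domain, serves the points calculation (508) by substituting the testing and free-recall sub-metrics.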
  • The points from study, Fig. 3 (305) and included in Fig. 5 (508), is an important metric because it indicates an outcome from work and learning within a domain-of-knowledge. It is also an observable indication of understanding and would be an important metric for new entrants to the workforce. The preferred calculation, data storage, and retrieval are similar to those of the previous metrics. In the example, the sub-metrics are related to the types of questions and retrieval. More points may be awarded to give preference to the use of logic and to the use of different memory types. Thus the two sub-metrics are testing and free-recall, but others may also be included.
  • The preferred method would be to store information from study applications and advanced learning systems that are connected through the use of an API, or that may embed software storing the information directly into the database.
  • The query, based on the user’s ID, would return the information by its sub-metrics as a sum per domain, and then sum the totals as the overall score.
  • The optional section for citations, votes, etc., Fig. 3 (306) and Fig. 5 (510), is displayed based upon the categories of trust.
  • The information may be introduced by the observed user, captured and validated through a scan of websites such as the IEEE or other professional publications, or reported directly from publications through the API.
  • The category of trust would then be determined.
  • A website may report a citation request as a citation; this would be a poor and inaccurate method.
  • A superior form of citation is from a known source where the citation is reported and can be accounted for separately.
  • Data from other forms of confirmation of effort and knowledge may come from other sources.
  • Social media’s business model does not reward factual information; it rewards growth in users and usage, since advertisers need more views and engagement.
  • These types of confirmation are not believable except through abstraction, such as the fact that the observed user has followers and engagement.
  • This type of understanding is an important metric in some domains, such as marketing and communications, although the metrics can be faked and thus are not believed to be free of errors.
  • The data would be captured through an API provided by this invention, or by giving the software that provides the information direct access to storage. For instance, LinkedIn may have direct access to the databases referred to earlier.
  • A publication such as IEEE may access the system through the API or be scanned by software for this invention.
  • The preferred method for Categories of Trust is an assignment that is associated with the Originating Source.
  • If the originating source is third-party software directly connected through an API, and the originating source observes the user creating or editing, discussing, practicing, or studying and testing in a domain-of-knowledge, then it would be category I, since it is unquestionably from a user and unquestionably in a domain-of-knowledge. All un-observed creation of data is not category I.
  • Such examples would be from a repository of information, such as a previously created document stored in a database. The entry could be provided by an individual or through software written to determine the category based on a question form provided to the administrator of the third-party software.
  • The ‘Knowledge requested by industry comparison’, Fig. 5, is collected from industry users as they search for users who possess certain knowledge. It is also collected by a voting system, through direct questioning, surveys, and by a comparison of individuals in similar job positions. This data is then stored for later use to calculate and show the amount of knowledge the user has interacted with that is relevant to a particular industry. The observed user’s data is then compared to a standard of individuals that the requester-user sets. The SQL request returns a comparison value against the knowledge requested by the individual. A generic, or default, average may also be used.
  • The Percentile by Domain process is calculated from time, Fig. 5 (507); points (508); value or significance (509); votes/citations if available (510); entities (512a) that the observed user has worked with or for; their knowledge (512b) metrics, depth of knowledge, and rarity; the people (512c) they have interacted with and their score or percentile; the projects (512d) they have been involved with; their achievements (512e); and their positions of responsibility (512f).
  • The knowledge metrics may be derived from multiple relevant sub-domains and their sub-metrics. Hours are divided into the sub-metrics as described in this invention, shown in Fig. 3/1 and calculated in Fig. 5.
  • An AI may report its accuracy, and similarly, if all categories are used in the calculation, the system may report the possible error rate, such as a 5% error.
  • The error may be calculated by the entirety of a category of trust. If Category IV, an unobserved category, is used, it is unknown whether the data contains errors. E.g., if the votes on a LinkedIn post were paid for, the votes would be fake or considered erroneous. Thus, the error rate would be the amount that the votes contributed to the whole of the scoring or percentile.
  • Strategy 2 creates a problem. This is a complex calculation because the scoring depends on other individuals who also depend on each other. A naive implementation, even for a small number of individuals, would cause a thread-lock problem. Advanced techniques are needed to block, for instance, an individual’s (person A) scoring that is dependent on other individuals’ (persons 1–n) scoring, where persons 1–n are in turn dependent upon person A’s scoring. Thus persons 1–n would need to be completed either with or without person A’s data, and may be completed asynchronously using special libraries such as the PCDP library provided by Rice University for Java. To make this calculation, strategies would block, remove, or delay ‘person A’ from being scored until all other persons 1–n are completed.
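  • The block-and-delay strategy above may be sketched as follows (in Python rather than Java/PCDP, and with invented base scores for illustration): persons 1–n are first scored without person A’s contribution, possibly concurrently, and only then is person A scored from their completed results, avoiding the circular wait a naive implementation would create.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical base scores; in the described system these would come from the
# per-domain metrics. "A" depends on "1".."3", who in turn reference "A".
base = {"A": 50, "1": 60, "2": 70, "3": 80}
peers_of_a = ["1", "2", "3"]

def score_without_a(person: str) -> float:
    """Score a dependent person with person A's contribution excluded."""
    return float(base[person])

# Phase 1: score all dependents concurrently, without A's data.
with ThreadPoolExecutor() as pool:
    peer_scores = dict(zip(peers_of_a, pool.map(score_without_a, peers_of_a)))

# Phase 2: person A is scored only after all dependents have completed.
score_a = base["A"] + sum(peer_scores.values()) / len(peer_scores)
# score_a = 50 + (60 + 70 + 80) / 3 = 120.0
```

  • Breaking the cycle this way trades a small amount of accuracy in phase 1 (A’s data is withheld) for guaranteed completion; a further refinement could iterate the two phases until the scores converge.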
  • Knowledge credibility is a measurement of the probability of correctness or infallibility of an individual, an organization of individuals (such as those at a company), or software (such as AI) when they provide a statement, make a decision, or deliver work within a domain-of-knowledge.
  • An individual may be said to have skills, and the individual may have knowledge in order to have the skills, but skills and knowledge are not the same.
  • A professional mountain-bike racer may have the ability to race down a rocky and difficult path at very steep inclines but not possess the ability to explain how force is applied to maintain stability and direction.
  • Expertise and talent may refer to the ability to do work without understanding the underlying information needed for such work. They refer to the ‘about’ rather than the ‘how’.
  • A software developer may have the ability to write software and use APIs to develop a product but not possess an understanding of how processors work, how data may be manipulated using advanced data structures, nor how information is stored in memory.
  • The developer may be considered an expert and to possess skills, but not possess a deeper theoretical understanding.
  • A domain is a division, sub-division/sub-domain, or section of a sub-division within an area of knowledge, practice, concentration, field, or other area of knowledge as recognized by academia or industry. It refers to both STEM and non-STEM fields and would be recognized in industry or academia as such. Some examples would be Mathematics to Calculus to its Multivariable division, Law to Divorce Law, Marketing to SEO Optimization, or Physics to Quantum Mechanics, etc.
  • Interaction is the activity of creating, reading, observing, listening, discussing, or practicing.
  • Practice is the activity of being actively engaged within a domain of interest, such as practicing business or conducting research, where the activity is captured by software or other means related to a category. Examples of practicing are solving problems, writing software, marketing, communicating, banking, and other practices related to STEM (science, technology, engineering, and mathematics) and non-STEM fields, in the day-to-day conduct of one’s profession or in solving problems such as may be done in academia.
  • STEM: science, technology, engineering, and mathematics.
  • Study time represents where a user has actively studied the domain, be it through reading, observing, listening, or studying, for example using a flashcard-like system, where the activity can be captured and represented by its category.
  • Create is where the user is actively creating knowledge for consumption by others within the domain-of-knowledge.
  • Percentile displays the user’s related percentile within a domain-of-knowledge and related to an entity such as a university or employer.
  • Positional and positional data is ranking, hierarchical, and rating data that compares individuals with each other based on metrics or performance.

Abstract

This invention provides a method and system to efficiently create a transparent, naturally biased calculation of an individual's interaction with knowledge from ranked data origins based on trust. It evaluates, calculates, scores, compares and displays effort and depth within a knowledge domain. It provides an understanding of an individual's knowledge; its relevance to industry; shows when similar individuals were hired by respected companies; and provides an apples-to-apples comparison of an individual with others at a chosen entity.

Description

CROSS-REFERENCE TO RELATED APPLICATIONS
[001] This application claims the benefit of the provisional patent application “Method and system to calculate and display knowledge credibility values,” U.S. Provisional Patent Application Ser. No. 63/626,282, filed on January 29, 2024, by inventor Lowell S. Stadelman, the contents of which are hereby incorporated by reference in their entirety.
BACKGROUND OF THE INVENTION
[002] Capability directly impacts opportunity. However, in most professional environments, even highly competent individuals struggle to establish credibility. Widespread use of data manipulation and fabricated claims has elevated skepticism, particularly in asymmetric settings where professionals lack direct knowledge of another’s experience and background. This affects opportunity in two areas. 1) Highly capable entry-level applicants and professionals pivoting into new careers find it difficult when entering unfamiliar industries. Understandably, hiring managers struggle to sort through hundreds or even thousands of responses, and face a reality that over 80% of respondents have embellished their education, previous roles, skills, and knowledge. This environment makes it particularly challenging for legitimate candidates to stand out. 2) In networking and business development, professionals who initiate contact with others — so-called “cold introductions” — encounter substantial barriers. Understandably, many individuals will incorrectly state their abilities during an introduction to get another individual’s attention but lack confidence in their understanding. Decision makers are often guarded during initial introductions and meet new individuals with skepticism, making it challenging during important first impressions to establish the trust essential for building business relationships.
SUMMARY OF THE INVENTION
[003] This invention provides a method and system to efficiently create a transparent, naturally biased calculation of an individual’s interaction with knowledge from ranked data origins based on trust. It evaluates, calculates, scores, compares and displays effort and depth within a knowledge domain. It provides an understanding of an individual’s knowledge regarding its relevance to industry and shows when similar individuals were hired by respected companies.
[004] One embodiment of this invention provides employers with a trusted indication of the depth of knowledge and capabilities of an individual, as well as their standing in comparison to other organizations and well-recognized companies that have hired individuals with similar knowledge depth.
[005] A second embodiment of this invention provides professionals a trusted means to support their claims of accomplishment, command, and depth of understanding in a domain-of- knowledge.
[006] A third embodiment of this invention provides an apples-to-apples means of comparison based on a decomposition of why employers hire talent and what they look for. It obtains a broad understanding from multiple metrics that can be indicative outside of a university setting. For instance, this invention considers the practice of knowledge regardless of where it occurs. Most practice is conducted in a business setting where there are real consequences, risks and opportunities with hard indicators of success and failure or approval and disapproval.
[007] A fourth embodiment of this invention provides a timeline of knowledge, projects, people, and positions of responsibility while being based on objective data and trust.
[008] For the purposes of this invention, knowledge credibility is a rating within a field or domain-of-knowledge based on the following: time expending effort within said domain; the depth of knowledge within said domain; the activity they’ve conducted with other individuals who are experts within said domain; the activity they’ve conducted with entities, such as companies or government offices, that are experts within said domain; the value from the exchange of a scarce resource caused by their expertise, or for their expertise, within said domain; the acknowledgement from other individuals who are considered experts within said domain; the uniqueness and difficulty of their accomplishments within said domain; and finally their positions of responsibility within said domain.
[009] To prevent unnecessary ambiguity, we’ve provided context for other important terms of this document at the end of the spec.
BRIEF DESCRIPTION OF DRAWINGS
[0010] Fig. 0 is a map of the symbology used for the drawings.
[0011] Fig. 1 is a diagram of an example computing system in which the claims of this invention may be implemented, but the invention may be used by electronic systems with more advanced features.
[0012] Fig. 2/1 is an illustration of the system components of the invention when a 3rd-party application is running on the user’s computing device.
[0013] Fig. 2/2 is an illustration of the system components relationship when a 3rd party application is a virtual application.
[0014] Fig. 3/1 is an example of the visual display of this invention either in a browser or an application.
[0015] Fig. 3/2 is an example display of selectable options that are used for calculations of this invention.
[0016] Fig. 3/3 is an example display of the data selection used in calculations for this invention.
[0017] Fig. 3/4 is another example display of the data selection used in calculations for this invention.
[0018] Fig. 4 is a background/daemon application on the user’s machine that records and uploads data used by this invention for analysis.
[0019] Fig. 5 displays the server component of the invention.
[0020] Fig. 6 is a diagram of an interface such as an API that communicates metrics data from external software such as from third party companies.
[0021] Fig. 7 is a diagram of an application that calculates and displays Fig. 3/1.
[0022] Fig. 8 is a diagram of the domain identification or classification process that displays how AI, Human in the Loop (HITL), or a 3rd-party API could be used, or bypassed if the classification is already provided by a trusted source.
[0023] Fig. 9 is a diagram of a timeline indicating an individual’s/observed-user’s professional career, and key data that demonstrates their capabilities.
DETAILED DESCRIPTION OF THE INVENTION
[0024] Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the disclosed example embodiments. However, it will be understood by those skilled in the art that the principles of the example embodiments may be practiced without every specific detail. Well-known methods, procedures, and components have not been described in detail so as not to obscure the principles of the example embodiments. Unless explicitly stated, the example methods and processes described herein are neither constrained to a particular order or sequence nor constrained to a particular system configuration. Additionally, some of the described embodiments or elements thereof can occur or be performed (e.g., executed) simultaneously, at the same point in time, or concurrently. Reference will now be made in detail to the disclosed embodiments, examples of which are illustrated in the accompanying drawings.
[0025] It is to be understood that both the general description and the detailed description are exemplary and explanatory only and are not restrictive of this disclosure. The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate several exemplary embodiments and together with the description, serve to outline principles of the invention.
[0026] This disclosure may be described in the general context of any hardware capable of executing customized preloaded instructions such as, e.g., computer-executable instructions for performing program modules. Program modules may include one or more of routines, programs, objects, variables, commands, scripts, functions, applications, components, data structures, and so forth, which may perform particular tasks or implement particular abstract data types. The disclosed embodiments may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in local and/or remote computer storage media including memory storage devices.
[0027] The knowledge on how to implement these embodiments/exemplars may be found in textbooks and undergraduate courses available online, and also through web searches. With the explanations below, a graduate of software engineering would understand how to implement these, as they are understood by someone who is familiar with the art.
[0028] Fig. 1 provides a generalized example of a computing system that is capable of the innovations described by the claims of this invention. The computing system is not intended to suggest a limitation of functionality. The invention described may be implemented in a wide range of computing systems, including personal devices such as phones, laptops, and web servers.
[0029] Fig. 2/1 is an embodiment that displays an example of the system elements of this invention and their relationship with the other elements, wherein a 3rd party application/software Fig. 2/1 (201) is interacting with this invention on the user's personal computing device. This invention will operate with one to any number of personal computing devices. The UI (User Interface) (300) of the personal computing device would likely be either a browser or an application (700), depicted further in Fig. 7 (700). If it is a local software application running on the same personal computing device, then it may receive its data from local storage (404) and may receive data and calculations from remote cloud-like services. If it is a browser, the preferred method is to receive the processed information from cloud-like services. Cloud-like services include a server system Fig. 2/1 (500), depicted further in Fig. 5 (500), which would process the information from raw data or organized data from the storage system (516). The following is the data needed from the 3rd party/external software Fig. 2/1 (200), such as a text processor, Software Development Tool, or any software related to a field or domain of knowledge that is run in the local environment. The third-party software would include its Vendor ID (406) and, if needed, a Project ID (407), as well as the source data (201). Depending on the purpose of the software and its method of data capture, it would receive a Category of Trust rating (301), shown as an association. Importantly, the vendor should not control the Category of Trust that is assigned to it. Further data, as shown in the Software Interface Fig. 6, would be provided as needed.
[0030] Data from the 3rd party/external software Fig. 2/1 (201) is sent either to the cloud using the Software Interface API (600), or to a daemon/background application (400), further depicted in Fig. 4 (400), which then stores the data in local storage on the personal computing device (404). The daemon/background application may also synchronize data with the cloud when connected.
[0031] Fig. 2/1 further shows that the User Interface (300) may be displayed either in a browser, or as a part of an application (700).
[0032] Fig. 2/2 displays another embodiment of system elements where the 3rd party application Fig. 2/2 (221) is running in a virtual environment and is hosted in a 3rd party cloud (220). Users interact with the data virtually, usually through a browser. The 3rd party virtual application/cloud (220) sends the necessary data to this invention's servers (500), which may operate in the same cloud. Here, the Category of Trust (301) is also shown as an association, and likewise would not be controlled by the 3rd party application in most conditions. Data, including the Vendor ID (406), the Project ID (407), and the source data (201), would be included with all other needed data of the Software Interface (600), which is sent to the server (500). When access is desired by a user, they may observe the metrics through the user interface (300), which would be provided by a browser.
[0033] For the remainder of the description, references to 3rd party software or applications could apply to any embodiment where the software is external. The third party may be the same entity, or another entity, that provides interaction with knowledge and whose data is sent to this invention.
[0034] For the purposes of clarity: multiple individuals interact with this invention. The observed user is the individual whose knowledge credibility is displayed. The observing user, or requesting user, is the user who is viewing the display of the metrics. Other individuals are generally users who have interaction with the observed user within a domain of knowledge.
[0035] Fig. 3/1 (300) depicts an example of the UI (User Interface) for the results of a request based on the user's ID Fig. 4 (405). This exemplar shows a series of rows containing gauges and information that explain an observed user's or entity's knowledge metrics and Knowledge Credibility within a Domain. At the top, Fig. 3/1 (301) is an example that allows the observer/user to select one or more categories of information that are used in the analysis and calculations conducted per category. Categories are an enumeration that represents the concreteness, or trust, that the data source is free of errors and related to the observed user. The number of categories may be beyond the four that are shown. The following four categories are an example of how data may be divided and used in a hierarchy-of-trust.
[0036] Category I of Fig. 3/1 (301) would be concrete information that is captured by software that can provide metrics directly due to its involvement in the practice or interaction within a knowledge domain. Such practice may be the use of an instrumented baseball bat and integrated cameras, where the user of the instrumented system is measured in performance, in numbers of hits vs. strikes, and in time such as hours of practice. It may be from a learning system where the software observes the user's interaction as they learn. These metrics can originate from 3rd party software such as Software Engineering Tools like an IDE, where the software is used to write software. It may also, for example, be tools that are used to search for errors, such as a peer review system. It may further be from a video chat system where a user is discussing a domain-of-knowledge, and which analyzes the discussion to understand the user's interaction with another individual.
[0037] Category II of Fig. 3/1 (301) would be non-concrete information, such as an acknowledgement from others, but where an observer, participant, or record has a stake, or stands to lose reputation, in the validity of the source connected to the data. An example would be a research paper whose authorship was not observed but which has been published in a digital library such as IEEE or Psychology publications. In this category, there is an implication of accountability for original authorship and accuracy of records, and agreement is shown, for example, by the number of citations and/or peer reviews. These acknowledgements provide strong authority for knowledge credibility but could contain inaccuracies as to who was the original creator.
[0038] Category III of Fig. 3/1 (301) is a non-concrete source of information, where a rating is related to a scarce resource such as sales or captured business. Here there are greater opportunities for data manipulation or errors, but there is reputation gained and lost if the information is inaccurate or the source is incorrect. An example of this may be where a manager has provided a review that contains the information used for knowledge credibility. It is far weaker than categories I & II but still holds stronger authority than if reported by the observed user themselves.
[0039] Category IV of Fig. 3/1 (301) is a weak source of information, where the rating comes from acknowledgement from others, such as votes from a website or social media platform. There is little at risk for the rater when they give a false vote, and frequently votes are paid for by the thousands. Even large volumes of votes, given the uncertainty of their origin and accuracy, provide no stronger an authority than if reported by the observed user themselves. An example would be a rating for a social media influencer.
[0040] Fig. 3/1 (307) displays rows of gauges and data per domain. Rows, as shown, are configurable based upon the categories selected above. Shown is the configuration with categories I and II selected. The configuration of rows and columns is as necessary to display the domains and the values used in the knowledge credibility score. Each row displays the domain (302) relevant to that row of gauges/graphs and information. The domain is always shown; physics is shown for this row's domain. (303) is the value of a scarce resource for that domain that the observed user or entity has interacted with or practiced. Value exists in all four categories. In the example, value is related to a time period, e.g. a year, 3 weeks, or 2 months, but the time period is optional. (304) represents effort as a measurement in time and is available in all 4 categories. Here we have shown a donut chart with the total time in hours and percentages given by the type of interaction: Practice, Study Time, Discuss, and Create. The third column, at (305), is for testing knowledge and displays points. Displayed is the sum of all scores within the domain, shown in a donut chart with percentages by the type of testing and learning, e.g. free recall vs. a standard multiple-choice test. The fourth column, (306), is an example of an optional column based upon the selected categories. For category I only, it would not be displayed, since it includes methods of capture that are not concrete. For categories II through IV it would represent the number of citations for category II, or votes in categories III and IV. In the case that categories II and III are selected, where both votes and citations are visible, they would each be represented by their own distinct column. The fifth column is (308) Percentile, and (309) is a link to more data. Percentile may also be displayed as a score. Refer to Fig. 3/2 (308b), an exploded view of the percentile, where a menu is provided allowing other entities (universities in this case) to be selected and where the user's percentile Fig. 3/2 (308a) can be given in relation to that entity. In the case that an observed user attends the University of Texas at Austin, by selecting a different university in the menu (308b), the observed user's percentile (308a) is given in relation to the entity that is selected in the menu. This demonstrates that an apples-to-apples comparison can be made for individuals in the system in relation to the data that is captured, rather than being restricted to only a single entity. As the observer/user selects new entities, the rating changes at Fig. 3/2 (308a). 'Entity' is considered broadly and is shown by different universities, by a company such as 'UpWork', or even 'overall' as shown in the 4th row in Fig. 3/1 (308).
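The entity-relative percentile described above can be sketched briefly. The following is a minimal illustration, not the invention's prescribed method: the cohort is the set of individuals associated with the entity selected in menu (308b) for the same domain, and the observed user's percentile is the share of that cohort scoring below them. Function and variable names are hypothetical.

```python
from bisect import bisect_left

def percentile_within_entity(observed_score: float, cohort_scores: list) -> float:
    """Percentage of the selected entity's cohort scoring below the observed user."""
    if not cohort_scores:
        return 0.0
    ranked = sorted(cohort_scores)
    below = bisect_left(ranked, observed_score)  # count of strictly lower scores
    return 100.0 * below / len(ranked)

# Example with hypothetical scores for a university cohort:
cohort = [55, 60, 70, 75, 80, 85, 90]
print(round(percentile_within_entity(82, cohort), 1))  # → 71.4
```

Re-running the function with a different cohort is what a server round-trip on a menu change at (308b) would amount to.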
[0041] Finally, at (310), the percentage of the knowledge that industry has requested and that an observed user has interacted with is given. This is discussed further in paragraph [0062] and is represented by Fig. 5 (514). Further shown at (310) are the companies that hired individuals with similar metrics, discussed further with Fig. 5 (515).
[0042] One exemplar for use of the percentile is the display provided in Fig. 3/2 (308a). This percentile discussion centers on Fig. 5 (511) Percentile by Domain. Discussion and interaction data capture the knowledge within a domain between the multiple individuals. To transparently show the relationship, it is displayed along with the understanding that individual A has interacted with the top n% of individuals in said domain, as shown in Fig. 3/4: (332) the other individual's name, and (338) their rating. Discussion and interaction data alone, without calculations from other individuals, may be used. It is convenient and makes reliance on key individuals unnecessary. However, there are cases where information would be unlikely to be captured by this invention. An exemplar would be Robert Oppenheimer and the team that worked with him to end WWII. Individuals who had interacted with him, or were a part of that organization, would receive no knowledge credibility outside of their organization for the unique depth of knowledge they were capable of providing. This is discussed further in Percentile Calculation Strategies [0081].
[0043] For transparency purposes, details of the data used in a calculation are also shown in the user interface by clicking on a link labeled 'More'. In Fig. 3/1 (309), the area of data used in calculations may be selected by a second menu Fig. 3/2 (309a). The displayed information would be shown, for example, on another page that provides a plurality of information related to that domain and the area that was collected, as in the example in Fig. 3/3 of the displayed data for Entities. The domains relevant to the row in Fig. 3/1 (309) would be provided as they are at (317). A table would display information about the related entities, knowledge, people, and projects within the selected domain. Each row would display information about each element. A preferred organization of a row would show the relationship with the observed user (person A) and other relevant data, so that the percentile and values are transparent. The columns preferably would be organized to show: the entity name (312); the relation with the observed user (313); the time involved with the entity (314); the value from a scarce resource (315), in this case 'income', that was exchanged for the knowledge; the sub-domains (316) that were involved with that entity; the entity's rating (318), with data provided for the rating of the entity; and a description of the entity in relation to the domain (319). Multiple rows of entities are displayed and could be scrolled through as shown at (320). The displayed information may be changed by selecting another area within a menu as shown at (311). Each area would display similar data to provide transparency and understanding of the percentile or score that an observed user is given per entity choice, shown in Fig. 3/1 (308) and Fig. 3/2 (308a).
[0044] Another example of a details page is when 'People' is selected in Fig. 3/2 (309a) or Fig. 3/3 (311). Fig. 3/4 is the 'People' page that shows the other individuals within a domain, or in other words "people 1 - n" that person A has interacted with. The layout is similar to Fig. 3/3. At Fig. 3/4: (331) is the other areas' selection menu; (332) is the individual's name; (333) the related domain; (334) the time in the domain; (335) the income over the duration; (336) the related subdomains; (337) the selected domain; (338) the individual's rating as shown, or optionally a percentile; (339) the description or reason for the rating; and (340) the rows with all individuals.
[0045] The background, or daemon, application Fig. 4 (400) is an embodiment shown on a user's computing device; alternative embodiments may be provided in a "virtual environment". The daemon application captures, translates, and stores metrics from the data source (201), which may be a 3rd party application, for use by the invention's other components as shown in Fig. 2/1 and Fig. 2/2. The background application does not need a display and is called by 3rd party software when needed, or it may be running in an environment that contains the domain ID process (800) and allows the third-party software, containing the data source (201), to run inside of the environment (not shown). The background application stores outputs from the data source (201) through the use of a bridge interface (402), or alternatively may use the API software interface (600), which may interface with software for a variety of purposes such as an IDE software development tool, text editing program, accounting software, drafting software, and/or learning software where the necessary variables are recorded. The data captured from the data source is then stored locally and/or remotely so that it may be combined and used in the calculations for the scoring systems of Fig. 5: (511) Percentile by Domain; (507, 508, 509, 510) Time, Points, Value, and Votes respectively; (514) Knowledge requested by industry comparison; (515) Users with equivalent metrics work entity; as well as the timeline (521). An optional bridge, shown at Fig. 4 (402), is used to convert and properly format the variables of the 3rd party/external software, the data source (201), if necessary. The data is then checked to ensure its integrity at (401) by checking that it is correctly formatted, checking for proper vendor codes, or by a hashing algorithm, and is encrypted with a key.
The data is then passed to the storage system (404), which may be a typical database, a block-chain connected through an API, or simply storage of the variables in memory such as in a string, object code, byte code, or even binary. The user's unique ID is stored (405). The third-party software vendor ID is stored (406). The project ID, if any, is stored (407). The total time of effort recorded for this session is combined with the accrued effort time (408) and categorized as creating (408a); reading, observing, or listening (408b); discussing (408c); or practicing (408d). If any monetary value or value from a scarce resource has been exchanged, it is combined with the accrued value (409) along with the type of value (410), e.g. dollars, bitcoin, or euros. The Data Source (201), which may be a 3rd party app, provides the domain IDs through the Domain ID Process (800), which may be provided through an API or through the 3rd party app's own processes. The domains are recorded for the session (411). Any people associations are stored (512). The category of data certainty/trust is stored (301). The description of the 3rd party software is stored at (412), a score is stored if any (413), and acknowledgements such as votes (414). Additionally, if connected to a network, the data is synchronized (404) with remote data and sent through security, which provides encryption and data assurance (403), through the network to the servers of the invention shown in Fig. 5 (500).
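The session record described above can be sketched as follows. This is a minimal illustration under assumed names only: the schema, field names, and use of SQLite and a SHA-256 tag are hypothetical stand-ins for "a typical database" and the integrity check at (401), not the invention's prescribed implementation.

```python
import hashlib
import json
import sqlite3

def integrity_tag(record: dict) -> str:
    """Hash over the canonical record, usable for the integrity check at (401)."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def store_session(db: sqlite3.Connection, record: dict) -> None:
    """Persist one captured session to local storage (404)."""
    db.execute(
        "INSERT INTO sessions VALUES (?,?,?,?,?,?,?,?,?,?,?)",
        (record["user_id"], record["vendor_id"], record.get("project_id"),
         record["domain_id"], record["create_s"], record["study_s"],
         record["discuss_s"], record["practice_s"], record.get("value", 0),
         record.get("value_type", "USD"), integrity_tag(record)))
    db.commit()

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sessions (user_id, vendor_id, project_id, domain_id,"
           " create_s, study_s, discuss_s, practice_s, value, value_type, tag)")
session = {"user_id": "u1", "vendor_id": "v9", "domain_id": "physics",
           "create_s": 0, "study_s": 1800, "discuss_s": 600, "practice_s": 0}
store_session(db, session)
print(db.execute("SELECT COUNT(*) FROM sessions").fetchone()[0])  # → 1
```

A daemon would additionally encrypt the record and synchronize it with the server when a network is available.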
[0046] Fig. 5 illustrates the server component of the invention and the elements that it communicates with. At Fig. 5 (500) is the server. The preferred implementation is in a cloud-like environment where there would likely be multiple copies of the server running simultaneously. Implementations of this part of the invention would conduct the following: 1) receiving data from 3rd party/external software systems or the daemon/background application (400); 2) processing the information so that it may be stored and retrieved efficiently at high access rates; and 3) sending the processed data to a browser or application for display, or sending the processed data for other uses.
[0047] Received data first enters through security (501). Received data includes: data (519) from 3rd party Bank and Pay Systems (518), to retrieve transactions and aggregate the necessary monetary values directly from the trusted source; source data (201) related to a domain of knowledge from the various sources, such as 3rd party or external software systems; and requests sent to various endpoints. Received data for metrics (601) and bank systems (518), if necessary, is translated into usable data through a bridge Fig. 5 (502), checked for data-integrity to ensure it can be trusted (503), then synchronized (504) and stored in a data storage system (516) along with the category of data certainty/trust Fig. 3/1 (301).
[0048] The data storage system Fig. 5 (516) is preferably a database, but could also be a blockchain provided through an API to further provide data integrity, or it could simply store CSV, string, object code, machine code, or ultimately binary data in memory.

[0049] Requests (517) to the server endpoints must include the information necessary to retrieve the data in a SQL query. A common request would be to see individual 'A's Knowledge Credibility. Such a call could come from a link, a QR-Code, a browser form, and/or a web-crawl request. Responses would preferably be protected, if so chosen by the observed user or for other purposes, and either return a message that the data is not available, or the user's metric data Fig. 3/1, Fig. 3/3, Fig. 3/4, and Fig. 9, depending on the request, authorization, and permission settings.
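The retrieval described in [0049] and [0051] can be sketched as one SQL query. The sketch below assumes a hypothetical schema in which trust categories I-IV are stored as integers 1-4, so that selecting a category also includes every more-concrete category in the hierarchy-of-trust; none of these names or conventions are prescribed by the invention.

```python
import sqlite3

DEFAULT_CATEGORY = 2  # assumed default: categories I-II

def credibility_rows(db, user_id, category=DEFAULT_CATEGORY, domain=None):
    """Per-domain aggregates for one observed user, as a request (517) might fetch."""
    sql = ("SELECT domain_id,"
           " SUM(create_s + study_s + discuss_s + practice_s) AS time_s,"
           " SUM(points) AS points, SUM(value) AS value"
           " FROM sessions WHERE user_id = ? AND category <= ?")
    args = [user_id, category]
    if domain is not None:  # optional domain filter (411)
        sql += " AND domain_id = ?"
        args.append(domain)
    return db.execute(sql + " GROUP BY domain_id", args).fetchall()

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sessions (user_id, domain_id, category,"
           " create_s, study_s, discuss_s, practice_s, points, value)")
db.executemany("INSERT INTO sessions VALUES (?,?,?,?,?,?,?,?,?)", [
    ("u1", "physics", 1, 0, 3600, 0, 1800, 40, 0),
    ("u1", "physics", 4, 0, 900, 0, 0, 0, 0)])  # category IV row excluded below
print(credibility_rows(db, "u1"))  # → [('physics', 5400, 40, 0)]
```

In production, a permission check would precede the query, returning a "data not available" message when the observed user has restricted access.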
[0050] A browser or application (700) would display the information after the server (500) is called at the correct endpoint with the information necessary to retrieve individuals/entities or groups and display them as shown in Fig. 3/1. The call containing the entity ID or user ID (405) is received by the server (500) and may optionally contain the category of data certainty/trust (301) to be used. If the category is not included, the system will use a default category. It may also contain the domain (411) the user is interested in, or other data as needed.
[0051] Data is then retrieved from the storage system (516), preferably using a SQL request. It is organized by its category/s of trust (301) and by each relevant domain (411), and contains calculations for interaction time (507), discussed further in paragraph [0053]. It further contains: calculations for Points (508) from testing; and calculations for Value (509), which are from a common scarce resource such as the US Dollar and are an indicator that interactions are relevant to industry and in demand. Acknowledgements such as votes and/or citations, if available in the category selected, would be included in the calculation (510). Finally, a complex calculation is made to provide the percentile of the observed user/entity by domain (511), discussed further in paragraph [0081]. 'Knowledge requested by industry comparison' (514) is included, as well as 'Users with equivalent metrics work entity', used to show how an observed user compares with other individuals who were hired by other companies (515), and further discussed in paragraph [0056].
[0052] Interaction time Fig. 5 (507) comprises the elements shown in Fig. 4: create (408a), study (reading, observing, listening) (408b), discuss (408c), and practice (408d). Create time (408a) is the activity of creating knowledge for the purpose of consumption by others, where the effort may be captured by a trusted system. Study time (408b) covers passive and active actions such as observing videos, reading, using flash cards, or taking tests or self-tests, where the activity may be captured by a trusted system. Discuss time (408c) is the activity of discussing information, which may be in a lecture, a tutoring system, or a business environment such as an engineering meeting, department meeting, or similar discussions, where the discussion may be captured by a trusted system. Practice time (408d) is the activity of being actively engaged in working with information within a domain-of-knowledge, where the activity may be captured by a trusted system.
[0053] The percentile by domain calculation Fig. 5 (511) is a complex calculation that may be displayed as it is in Fig. 3/1 (308). The data would change as a user selected different entities Fig. 3/2 (308b). Upon a change, a possible implementation would make a request to the server for another calculation in relation to the new entity, and a percentile in relation to the observed user would be displayed at Fig. 3/2 (308a). The Percentile by Domain Process Fig. 5 (512) also considers the category of data certainty/trust (301) in its calculation. It further depends on the calculation strategy (513), which is discussed in paragraph [0081]. The Percentile by Domain Process includes a selection from time Fig. 5 (507), points (508), value (509), votes and/or citations (510), percentiles from entities that an observed user has worked with (512a), their own knowledge and depth (512b), the associations of people (512c), the projects (512d), their significant achievements (512e), and positions of responsibility (512f).
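One way to picture how a calculation strategy (513) selects and combines the components above is as a weighted score followed by a population ranking. This is only an illustrative sketch; the component names and weights are hypothetical, and the actual strategies are described in paragraph [0081].

```python
def knowledge_score(metrics: dict, weights: dict) -> float:
    """Combine component metrics (e.g. 507-510, 512a-512f) into one score.
    The weights stand in for one possible calculation strategy (513)."""
    return sum(weights.get(name, 0.0) * value for name, value in metrics.items())

def percentile_by_domain(observed: dict, population: list, weights: dict) -> float:
    """Share of the domain population scoring below the observed user (511)."""
    if not population:
        return 0.0
    s = knowledge_score(observed, weights)
    below = sum(1 for p in population if knowledge_score(p, weights) < s)
    return 100.0 * below / len(population)

weights = {"time_h": 1.0, "points": 2.0}        # one assumed strategy
observed = {"time_h": 10, "points": 5}          # score 20
population = [{"time_h": 5, "points": 1},       # score 7
              {"time_h": 20, "points": 10},     # score 40
              {"time_h": 8, "points": 4}]       # score 16
print(round(percentile_by_domain(observed, population, weights), 1))  # → 66.7
```

Restricting `population` to individuals associated with a selected entity yields the entity-relative behavior of menu (308b); swapping `weights` corresponds to choosing a different strategy.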
[0054] The 'Knowledge requested by industry comparison' provides the percentage of the information that the observed user has interacted with and has an understanding of. The output is as shown in Fig. 3/1 (310) on the top row. This is discussed in depth in paragraphs [0059] and [0073].
[0055] The 'Users with equivalent metrics work entity' (515) finds similar individuals based on their metrics, discovers the companies they were hired by or are working for, and provides this as output as shown in Fig. 3/1 (310) on the bottom row. This is discussed in depth in paragraph [0072].
[0056] Fig. 5 (520) browser or (700) application represents the user's device where the preceding outputs would be displayed.
[0057] Fig. 6 is an exemplar of a software interface, an API, for 3rd party or external software, or for communication through a bridge. 3rd party software, after passing testing and meeting standards for assurance of accuracy and protection from tampering, would receive a vendor number and the proper instructions to communicate with this invention. The vendor number, along with an associated category of trust (301), would be stored in the database. The following is the preferred information included: the user ID (405), vendor ID (406), project ID (407), and total time of effort for this session (408), further identified as Create Time (408a); Reading, Observing, or Listening time (408b); discussion time (408c); and practice time (408d). If it is banking or payment software, it may include the sum of a value (409) and the type of value (410). It would further use the domain process Fig. 8 (800) to provide the domain (411). Also included are the project description, if any (412), a score, if any (413), and acknowledgements from others, such as votes, citations, an academic grade or score, and/or a peer review, if any (414). Further included would be any entity associations (512a), people associations (512c), and any positions of responsibility if available (512f). The captured data would then be sent to the server Fig. 5 (500) or the local background application Fig. 4 (400). Data sent to the server would include processing for integrity assurance (401).
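The preferred fields of the Fig. 6 interface can be summarized as a single structured payload. The sketch below is an assumed shape only, with hypothetical field names; the parenthesized comments map each field to the reference labels above.

```python
from dataclasses import asdict, dataclass, field

@dataclass
class SessionPayload:
    """One session report from a vetted 3rd-party tool (Fig. 6)."""
    user_id: str                 # (405)
    vendor_id: str               # (406)
    project_id: str = ""         # (407)
    create_s: int = 0            # (408a) create time, seconds
    study_s: int = 0             # (408b) reading/observing/listening time
    discuss_s: int = 0           # (408c) discussion time
    practice_s: int = 0          # (408d) practice time
    value: float = 0.0           # (409) sum of a value, if payment software
    value_type: str = "USD"      # (410) type of value
    domain_ids: list = field(default_factory=list)  # (411) via Fig. 8 (800)
    description: str = ""        # (412) project description
    score: float = 0.0           # (413)
    acknowledgements: int = 0    # (414) votes, citations, grades, reviews

payload = SessionPayload(user_id="u1", vendor_id="v9",
                         study_s=3600, domain_ids=["physics"])
print(asdict(payload)["study_s"])  # → 3600
```

Serializing such a structure (e.g. to JSON) would form the body sent to the server (500) or the background application (400).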
[0058] Fig. 7 (700) is an alternate embodiment that represents an application for displaying the metrics of this invention and is shown on a user's device. The elements of this application may also exist as part of another, larger application with broader purposes. In the shown embodiment, the application communicates with local storage (which may be a database), and local storage communicates with the background application (400). The application may, if connected to the internet, make a request to the server (500) with the user ID (405) and trust category (301). The server would then return the requested data and calculations, either individually or at the same time. The calculations would include (514) 'Knowledge requested by industry comparison', (515) 'Users with equivalent metrics work entity', and (511) 'Percentile by domain'. The calculations would include the previously mentioned calculations for the server: (512a) Entities calculations; (512b) Knowledge calculations; (512c) People calculations; and (512d) Project calculations, calculated per domain (411) and per category of data certainty/trust (301). Locally stored information would include: the time calculation (507); points calculation (508); value calculation (509); and votes calculation (510), also conducted per domain (506) and per category (301). The server may also provide (521) Timeline information. These data would be formatted by the Application (700) and displayed in the user interface (300) in the preferred display similar to Figs. 3/1, 3/2, 3/3, 3/4, and 9. All communications to the server would be through security (701).
[0059] Fig. 8 depicts an exemplar of determining the knowledge domain from data that is provided from Fig. 8 (601), where data may be provided from multiple source types that are either classed or unclassed. 'Classed' refers to whether the data is provided with a domain ID. One exemplar of a classed system would be an instrumented baseball system that would include a bat, baseball, and a baseball field that could report the performance of a baseball player. In this case the data would be preformatted with domain-specific information because of its direct relationship with the performance of a baseball player and thus their knowledge and experience. Another exemplar of classed information may be from a learning/study application where the domain ID is provided by an institution or by multiple individuals who are unrelated but in agreement. The Category of Trust is provided outside of the data, based on the vendor ID. An exemplar of unclassed information would be data from text that is an output of a video-chat conversation, or from text that is written by an observed user. If needed, a bridge (502) would serve to convert data from the source to the properly formatted data needed for this invention. APIs provided by this invention, where they are used by 3rd parties, would also serve to correctly format data for use by this invention. The two cases for the data that comes from inputs are classed and unclassed. If the data has a trusted domain identification already provided, then it "is classed" (801); otherwise it is not. In the case that the domain ID is provided and is trusted, the domain ID is considered "classed" (803) and is stored (411). In the case that the domain ID is not provided, or is not trusted, then the domain ID is considered unclassed (802) and the domain ID may be derived through an internally provided AI content classifier (804), a "Human in the Loop" (HITL) process, or an API provided by a commercial AI company.
After the domain ID has been properly identified, it is then stored at (411). An indicator may also be included that indicates the level of certainty that the domain has been correctly provided. If AI is used, then depending on the model's accuracy and implementation, it may, for example, report that the model is 0.98 out of 1.0 confident of the correct domain ID. Data that is provided may not be limited to only one domain. For example, if text data is provided and the subject is physics, there may be several domains that are relevant, such as Particle Physics, Nuclear Physics, and Calculus.
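The classed/unclassed branch of Fig. 8 can be summarized in a short dispatch sketch. The function and vendor names below are hypothetical, and the toy classifier merely stands in for an AI model (804), a HITL step, or a commercial API, any of which could return several related domains with confidences.

```python
def resolve_domains(payload: dict, classify, trusted_vendors: set):
    """Return (domain_id, confidence) pairs, following Fig. 8.

    Data arriving with domain IDs from a trusted vendor is 'classed' (803)
    and accepted as-is; otherwise it is 'unclassed' (802) and routed to the
    injected `classify` callable."""
    if payload.get("domain_ids") and payload.get("vendor_id") in trusted_vendors:
        return [(d, 1.0) for d in payload["domain_ids"]]
    return classify(payload.get("text", ""))

def toy_classifier(text: str):
    # Stand-in for a real model; returns multiple relevant domains.
    if "momentum" in text:
        return [("classical-mechanics", 0.98), ("physics", 0.95)]
    return []

print(resolve_domains({"vendor_id": "x", "text": "momentum problems"},
                      toy_classifier, {"v9"}))
# → [('classical-mechanics', 0.98), ('physics', 0.95)]
```

The confidence values would be stored alongside the domain ID (411) as the certainty indicator described above.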
[0060] Depth of knowledge, for the purposes of this invention, is depth within a domain. It is not the DOK framework. Depth of knowledge would be discovered through the same process of identification as the domain ID discussed in paragraph [0059]. As an example, the identification of an individual's understanding of physics could be shown through the multiple levels of dependent parent domains. Depth of knowledge in physics may be shown by time, value, and depth in multiple sub-domains, and the depth within them, such as for classical mechanics and momentum, along with the required algebra to solve problems. In the exemplar display of metrics Fig. 3/1 (300), Software Engineering is given an amount of time, a value, and/or scoring. Subdomains such as 'Software Architecture' are also displayed with a row, and 'Data Structures' would also have a row. Each subdomain includes time spent in that domain and a link to the supporting data. The metrics of parent domains do not need to be a sum of the subdomains. Rather, they are initially mapped as a key-value association. During a period of time, while the data is collected, each increment in value is added to the domains and subdomains that are related. Another example, not shown, is Marketing, where its subdomains, such as Digital Marketing, would have a row. Digital Marketing is a parent of Search Engine Optimization, and if the observed user had effort there, its metrics would be displayed similarly, depending on the Category of Trust the observer has selected.
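The incremental crediting of parent domains described above can be sketched as a walk up a parent map. The domain names and the `PARENTS` map below are illustrative assumptions, not a prescribed taxonomy.

```python
# Hypothetical parent map: each domain names its parent domain, if any.
PARENTS = {"momentum": "classical-mechanics", "classical-mechanics": "physics"}

def credit_time(totals: dict, domain: str, seconds: int) -> None:
    """Add an increment to a domain and to every ancestor domain, so parent
    totals accrue alongside subdomains rather than being summed afterward."""
    while domain is not None:
        totals[domain] = totals.get(domain, 0) + seconds
        domain = PARENTS.get(domain)

totals = {}
credit_time(totals, "momentum", 1800)
credit_time(totals, "classical-mechanics", 600)
print(totals["physics"], totals["momentum"])  # → 2400 1800
```

This matches the key-value association above: the parent's total need not equal the sum of its subdomains, since increments may be credited at any level.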
[0061] Knowledge needed by industry Fig. 3/1 (310) is derived from individuals who are associated as working for an industry. A comparison is made between the observed user, who is the target subject of a query, and other individuals who are similar to the target. The returned data would include the industry and companies that the similar individuals are working for, and a percentage of the knowledge the individuals have. Alternatively, data may be collected from job postings, and a comparison may be made between the knowledge of the target subject and the knowledge requested in the job postings.
[0062] For achievement or accomplishment, selection of the key events may be through the aggregation of data that points to the events and the individuals who are connected through them. The same methods as explained in Fig. 8 could be used to identify or validate these dates and events if they are provided by the observed user or another individual, where the aggregation is sufficient to provide evidence that the event occurred. Thus, it may not be category I, since it is not observed but reported by category II or III data, and it would be displayed based upon the selected categories of trust. Achievement or accomplishment, in this exemplar, would be stored in the database and associated with the user's unique ID, a date, and other relevant columns for description and data as needed. Achievement or accomplishment may be shown similar to the data in the tables of Figs. 2/3 and 2/4 and in the Timeline Fig. 9.
[0063] Positions of responsibility may be entered similarly: through a webform entry by a user with access restricted to the observed user's employer, through web-scraping, through data mining, or through an automated system, for example where the employer is connected to a 3rd party data mediator. Positions of responsibility, in this exemplar, would be stored in the database and associated with the user's unique ID, a start date, and an end date. Positions of responsibility may be shown similar to the data in the tables of Figs. 2/3 and 2/4 and in the timeline of Fig. 9.
[0064] Fig. 9 is an exemplar of the display of a timeline. It provides a chronology of an observed user's or entity's interaction with knowledge and connects the dots with a visual display. The exemplar is not meant to limit a timeline; it may be represented with or without graphical lines or may be text only. A timeline may be any chronological representation. The exemplar displays the company or entity (901) and dates in reverse order (902). It associates other key information such as positions of responsibility and significant contributions such as key projects and awards (903). The timeline further displays others that were associated with the observed user and their significance (904). Hyperlinks to human-readable tables such as shown in Figs. 3/1, 3/3, and 3/4 may be displayed in the timeline, giving the observer the ability to quickly understand the underlying data (905). The timeline can be implemented by making a query to the database for the appropriate data. The timeline calculation of Fig. 5 (521) organizes the data in the desired chronological order. The data is retrieved from the DB in a SQL query. The query would request the desired data and organize it by entity, the observed user's unique ID, and the start and end dates of their interaction with the entity.
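The timeline query described above might look like the following sketch using SQLite. The table name `interactions`, its columns, and the sample rows are assumptions for illustration, not the invention's actual schema.

```python
import sqlite3

# Minimal sketch of the timeline retrieval: query interactions by the
# observed user's unique ID and order them in reverse chronology, as in
# Fig. 9 (902). Schema and data are hypothetical.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE interactions (
    user_id TEXT, entity TEXT, role TEXT,
    start_date TEXT, end_date TEXT)""")
con.executemany(
    "INSERT INTO interactions VALUES (?, ?, ?, ?, ?)",
    [("u1", "Acme Corp", "Engineer", "2018-03-01", "2021-06-30"),
     ("u1", "State University", "Student", "2014-09-01", "2018-02-28")])

rows = con.execute(
    """SELECT entity, role, start_date, end_date
       FROM interactions WHERE user_id = ?
       ORDER BY start_date DESC""", ("u1",)).fetchall()
for entity, role, start, end in rows:
    print(f"{start} .. {end}  {entity} ({role})")
```

The presentation layer would then attach positions, projects, awards, and associated people (903, 904) to each entity row and render the hyperlinks of (905).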
[0065] Data integrity, Fig. 5 (503), is common and may be accomplished through hashing algorithms applied to the data and sending the hash, using encryption, along with the data. This ensures that the data is sent from a trusted source and that it was not changed between its source and the server.
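One common way to realize the hash-plus-secret check described above is a keyed hash (HMAC), sketched below. The shared key and payload are illustrative assumptions; the paragraph does not prescribe a specific algorithm.

```python
import hmac
import hashlib

# Sketch of the integrity check: the sender computes an HMAC-SHA256 over the
# payload with a shared secret; the server recomputes it and compares. A
# matching digest shows the data came from a key holder and was unchanged.
KEY = b"shared-secret-key"  # hypothetical shared secret

def sign(payload: bytes) -> str:
    return hmac.new(KEY, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, digest: str) -> bool:
    # constant-time comparison avoids timing side channels
    return hmac.compare_digest(sign(payload), digest)

payload = b'{"user_id": "u1", "domain": "Physics", "minutes": 30}'
digest = sign(payload)
assert verify(payload, digest)             # untouched data passes
assert not verify(payload + b"x", digest)  # any change is detected
```

In practice the digest would travel with the data over an encrypted channel, as the paragraph notes.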
[0066] Data synchronization, Fig. 5 (504), ensures that stored data is correct at all points. It provides the necessary actions to store the information in database tables so it may be efficiently stored and easily retrieved according to the needs of this invention.
[0067] A search by a specific domain or sub-domain may be conducted on this platform using an implementation where the observed users were scored based upon the metrics and methods of this invention. This could be done through a number of implementations, such as a webform or an options list that connects to the webserver. The server would conduct the search using a SQL query to the database, through a software program specifically developed for this purpose, or through AI.
[0068] An enumerated list of observed users and entities may be created based on the calculations and methods of this invention and transmitted to other entities that would be interested in reporting ratings of observed users, such as web-based search engines like Google or Bing.
[0069] Observed users would have the option to elect what data is available to be viewed by an observer user, or whether it may be viewed at all. Further, they may elect to remove all data that is related to them.
[0070] Calculations: The following provides examples for the calculations required by this invention. They are not intended to limit this invention to these methods of calculation but to provide a working example to someone reasonably familiar with software engineering and computer science concepts.
[0071] The "users with equivalent metrics work at xxxx" metric, Fig. 3/1 (310), calculated in the server, Fig. 5 (515), is derived from the SQL database with a reference to the observed user, the domains that are associated with the observed user, and their ratings derived from the other calculations of this invention. The SQL tables would further associate the observed user with the companies they have worked for. A comparison is made between individuals who are currently employed by well-respected companies and the observed user. This embodiment would derive the domains of the observed user and base calculations on various aspects of critical knowledge domains. Depending on the category of trust that is selected (for example, Category I: only information that is known to be free of errors), it would include individuals with similar knowledge domains, depth of knowledge, and a minimum time in practice, study, discussion, and creating within these domains. If the user selects a lower/weaker category of trust, the data that is used for that category is included in the calculation. Another calculation strategy may be to rate individuals according to the category of trust within relevant domains, compare the observed user with other individuals who share a similar percentile, then return the companies with the largest earnings and scale that have a relevant minimum number of similar individuals.
[0072] The 'Knowledge requested by industry' metric, shown in Fig. 3/1 (310) and calculated in the server, Fig. 5 (514), is derived from job requests and an association with knowledge, time, and the metrics that are in a category of trust. Some skillsets may require an individual, who may be a social media marketer, to have demonstrated experience as a social media influencer. The skills needed may be video editing, sound editing, an understanding of cameras and lighting, an understanding of communication using appropriate language for the audience, and an understanding of video and image composition, to name a few. The comparison between the skills and experience requested in industry and the observed user's skills and experience is shown. The calculation can be accomplished using SQL, where job postings have been scanned from websites like LinkedIn or Glassdoor, or uploaded to this invention directly through queries made by hiring managers, for example. Normalization of required knowledge could be accomplished through data structures that provide one-to-many and many-to-one relationships, such as a hashtable with a linked list, which may be built through the use of Machine Learning where models are trained on large datasets of job postings. As a job posting is uploaded to storage, the skills are compared to the required knowledge for those skills. The nodes in the linked list contain a variable for experience as time in months. When the request for the observed user is made, a comparison can be made by requesting the skill needed by industry, and time, against the observed user's knowledge and time. A percentage or measurement is returned and displayed.
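The hashtable-with-linked-list structure described above can be sketched in Python, where a dict keyed by posted skill plays the hashtable and a list of (knowledge, months) pairs stands in for the linked list. The skill names, months, and the helper names `add_posting` and `coverage` are assumptions for illustration.

```python
# Hedged sketch of the normalization structure: each posted skill maps to
# the knowledge items it requires, with required experience in months, and
# the observed user's experience is compared against those requirements.
required = {}

def add_posting(skill, knowledge, months):
    """Record that a job posting's skill requires this knowledge/experience."""
    required.setdefault(skill, []).append((knowledge, months))

add_posting("social media marketing", "video editing", 12)
add_posting("social media marketing", "audience communication", 24)

def coverage(skill, user_months):
    """Fraction of a skill's knowledge requirements the user meets."""
    needs = required.get(skill, [])
    met = sum(1 for k, m in needs if user_months.get(k, 0) >= m)
    return met / len(needs) if needs else 0.0

user = {"video editing": 18, "audience communication": 6}
print(coverage("social media marketing", user))  # 0.5: one of two needs met
```

The returned fraction corresponds to the "percentage or measurement" the paragraph says is displayed to the observer.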
[0073] The value of knowledge, Fig. 3 (303) and Fig. 5 (509), is an important metric because it helps to demonstrate the significance of the observed user and the knowledge. A time metric is included, since a value is difficult to understand without one. The value may come from multiple sources, such as bank account deposits from an employer or a contractee that the observed user is working with. The value may come through the API, Fig. 6 (600), provided to applications that, for example, allow the sale of study materials or course videos. The values, along with a timestamp, would be stored in a database table along with the source and, if available, the domain-of-knowledge. When the observed user's metrics are displayed, a query for the indicated time frame, the amount, and the domain is made using the observed user's unique ID. The query would return one line for each domain with the sum of the combined amounts under that domain.
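The per-domain sum described above is a straightforward GROUP BY query. The sketch below uses SQLite with an assumed table `value_events`; the schema, amounts, and date range are illustrative, not the invention's actual design.

```python
import sqlite3

# Sketch of the value-of-knowledge query: sum amounts per domain for one
# observed user within an indicated time frame, one row per domain.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE value_events (
    user_id TEXT, domain TEXT, amount REAL, ts TEXT)""")
con.executemany("INSERT INTO value_events VALUES (?, ?, ?, ?)", [
    ("u1", "Software Engineering", 500.0, "2024-01-05"),
    ("u1", "Software Engineering", 250.0, "2024-01-20"),
    ("u1", "Marketing", 100.0, "2024-01-12")])

rows = con.execute(
    """SELECT domain, SUM(amount) FROM value_events
       WHERE user_id = ? AND ts BETWEEN ? AND ?
       GROUP BY domain ORDER BY domain""",
    ("u1", "2024-01-01", "2024-01-31")).fetchall()
# rows -> one (domain, total) line per domain for the time frame
```

The same shape of query, with different columns, serves the hours-of-effort and points-from-study metrics described in the following paragraphs.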
[0074] The hours of effort, Fig. 3 (304) and Fig. 5 (507), is an important metric because it represents the total number of captured hours of effort within a domain-of-knowledge. It can be a positive indicator of an observed user's dedication and work ethic. The metric of time is broken into sub-elements; in this example they are study, create, discuss, and practice, and there is a total time. The preferred method for calculating time is to sum the total time with a SQL query to the same database as the previous queries. The reported hours would be included as entries from devices connected through an API or directly through the user's on-board app that may be running in the background. The query would sum the time in each sub-metric, and then the total sum may be computed in the query or in the presentation software. The query would request the time by domain and by the unique ID of the observed user.
[0075] The points from study, Fig. 3 (305) and included in Fig. 5 (508), is an important metric because it indicates an outcome from work and learning within a domain-of-knowledge. This is also an observable indication of understanding and would be an important metric for new entrants to the workforce. The preferred calculation, data storage, and retrieval are similar to the previous metrics. In the example, the sub-metrics are related to the types of questions and retrieval. More points may be awarded to give preference to the use of logic and to the use of different memory types. Thus the two sub-metrics are testing and free recall, but others may also be included. Similar to previous queries, the preferred method would be to store information from study applications and advanced learning systems that are connected through the use of an API, or that may embed software that stores the information directly into the database. The query, based on the user's ID, would return the information by its sub-metrics as a sum for each domain, and then sum the totals as the overall score.
[0076] The optional section for citations, votes, etc., Fig. 3 (306) and Fig. 5 (510), is displayed based upon the categories of trust. The information may be introduced by the observed user, captured and validated through a scan of websites such as the IEEE or other professional publications, or reported directly from publications through the API. Depending on the credibility of the publication and the reputational and/or economic cost of inaccurate reports, the category of trust would be determined. For example, a website may report a citation request as a citation; this would be a poor and inaccurate method. A superior form of citation is from a known source where the citation is reported and can be accounted for separately, e.g., another publication cited report xxxxx article from the observed user. These reports are considered class II or lower since they are not a direct observation that the user has knowledge. Rather, they are an acknowledgement.
[0077] Similarly, data from other forms of confirmation of effort and knowledge may come from other sources, e.g., from Facebook as a vote or as a follower. Social media's business model does not reward factual information; it rewards growth of users and usage, since advertisers need more views and engagement. These types of confirmation are not believable except through abstraction, such as that the observed user has followers and engagement. This type of understanding is an important metric in some domains such as marketing and communications. Because the metrics can be faked, they are not believed to be free of errors. Similar to the previous calculations, the data would be captured through an API provided by this invention, or by providing direct access to storage from the software that is providing the information. For instance, LinkedIn may have direct access to the databases referred to earlier. A publication such as IEEE may access through the API or be scanned by software for this invention.
[0078] The preferred method for Categories of Trust is an assignment associated with the originating source. E.g., if the originating source is third-party software directly connected through an API, and the originating source observes the user creating or editing, discussing, practicing, or studying and testing in a domain-of-knowledge, then it would be Category I, since it is unquestionably from a user and is unquestionably in a domain-of-knowledge. All un-observed creation of data is not class I. Examples would be data from a repository of information, such as a previously created document that is stored in a database. The entry could be provided by an individual or through software written to determine the category based on a question form provided to the administrator of the third-party software.
[0079] The 'Knowledge requested by industry' comparison, Fig. 5 (514), is collected from industry users as they search for users who possess certain knowledge. It is also collected by a voting system, through direct questioning, through surveys, and by a comparison of individuals in similar job positions. This data is then stored for later use to calculate and show the amount of knowledge that the user has interacted with that is relevant to a particular industry. The observed user's data is then compared to a standard of the individuals that the requester-user sets. The SQL request returns a comparison value against the knowledge requested by the individual. A generic, or default, average may also be used.
[0080] Percentile calculation strategies:
[0081] There are a few strategies that may be used. Multiple calculations are made based on a plurality of data from a plurality of sources and dependent on the selected category-of-trust, Fig. 3/1 (301). The Percentile by Domain process is calculated from time, Fig. 5 (507); points (508); value or significance (509); votes/citations if available (510); entities (512a) that the observed user has worked with or for; their knowledge (512b) metrics, depth of knowledge, and rarity; the people (512c) they have interacted with and their score or percentile; the projects (512d) they have been involved with; their achievements (512e); and their positions of responsibility (512f).
[0082] The knowledge metrics may be derived from multiple relevant sub-domains and their sub-metrics. Hours are divided into the sub-metrics as described in this invention, shown in Fig. 3/1, and calculated in Fig. 5.
[0083] If an AI is used, it may report its accuracy; similarly, if all categories are used in the calculation, it may report the possible error rate, such as a 5% error. The error may be calculated over the entirety of a category of trust. If Category 4 is used, an unobserved category, it is unknown whether the data contains errors. E.g., if the votes on a LinkedIn post were paid for, the votes would be fake or considered erroneous. Thus, the error rate would be the amount that the votes contributed to the whole of the scoring or percentile.
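The error-rate rule above (the unobserved category's share of the total score) can be sketched in a few lines. The category labels, point values, and the function name `error_rate` are illustrative assumptions.

```python
# Sketch: the reported possible error is the fraction of the total score
# contributed by unobserved (and so possibly erroneous) categories of trust.
def error_rate(contributions):
    """contributions: {category: (points, observed_flag)} -> unobserved share."""
    total = sum(points for points, _ in contributions.values())
    unobserved = sum(points for points, observed in contributions.values()
                     if not observed)
    return unobserved / total if total else 0.0

# Hypothetical score: 80 points from observed Category I data, 4.2 points
# from Category 4 votes that could have been paid for.
score = {"I": (80.0, True), "IV": (4.2, False)}
print(round(error_rate(score) * 100, 1))  # ~5.0% possible error
```

This matches the example in the paragraph: if the possibly-fake votes contributed about 5% of the whole, a 5% possible error is reported.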
[0084] Significance. E.g., Trinity, for Oppenheimer and his team, set them apart from all other nuclear physicists. This may be a multiplier or an addition of a set number of points within the field. For most individuals, significance may be measured by value and may add a percentage of the value they've brought in comparison to the value of others. An exemplar would be, for a top contributor, a '.1' multiplier applied to the points in that domain.
[0085] Calculating knowledge credibility gained from interactions with other individuals. There are a few strategies that may be used.
[0086] In the first strategy, the observed user gains only from the interactions they've made with the other individuals. This is explained in this invention; the observed user gains no real advantage except through direct interactions with the other individual, and it is more of a pure calculation.
[0087] In the second strategy, the observed user gains a fraction from individuals they interact with within a domain of knowledge, Fig. 5 (513). To use other individuals' percentiles as a part of person A's positioning: Step 1. Restrict the calculation to only persons within the domain. Step 2. Ensure there are interactions related to the domain. Step 3. Calculate all dependent areas. Step 4. Score persons 1 - n based on dependent information. Step 5. Provide some percentage of the top individuals' scores to those they are connected with. Step 6. Recalculate step 5, providing a small percentage from each higher-scoring person. If person A attended a top university and was taught by top professors, they would gain percentages from these position scorings. Next, if individual A worked for a top organization, they would gain further points from that organization. As they worked with new individuals who are top individuals in that domain, they would gain further points. In this strategy, the points gained from each entity and individual could be a small fraction. Over time, due to the number of top people and entities they've interacted with, these percentages would add up to a significant gain in points. Strategy 1 works because the discussions would capture interaction with knowledge. Strategy 2 requires tuning the percentage gains to prevent wild or unrealistic gains, but allows for interactions that are not captured by hardware or software. In the case that person B is J.R. Oppenheimer on the Manhattan Project, the conversation and effort may not be captured, but the relationship between person A and B could still be shown.
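The steps above can be sketched as a small propagation routine. The damping fraction, number of rounds, and the sample scores are tuning assumptions (as the text notes, the gains must be tuned to prevent unrealistic growth); this is an illustration, not the invention's actual calculation.

```python
# Hedged sketch of strategy 2: each person receives a small fraction of the
# scores of the people and entities they interacted with in a domain, and
# the pass is repeated a few times (steps 5 and 6 above).
def propagate(base, edges, fraction=0.05, rounds=3):
    """base: {person: base score}; edges: {person: [connections]}."""
    scores = dict(base)
    for _ in range(rounds):
        gains = {p: fraction * sum(scores[q] for q in edges.get(p, []))
                 for p in scores}
        scores = {p: base[p] + gains[p] for p in scores}
    return scores

# Hypothetical domain: person A was taught by a top professor and worked
# with a strong peer; A gains a small fraction of each connection's score.
base = {"A": 10.0, "prof": 90.0, "peer": 40.0}
edges = {"A": ["prof", "peer"]}
result = propagate(base, edges)
# A's score rises above its base of 10 through its connections alone.
```

The small `fraction` keeps any single connection's contribution modest, so only an accumulation of many strong connections produces a significant gain, as the paragraph describes.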
[0088] This calculation can be performed once without much difficulty, but problems begin when there are multiple people and the scoring/positioning calculations must be repeated, since person A's score depends on persons B and C, person B relies on A and C, C relies on both A and B, etc.
[0089] Strategy 2 creates a problem. This is a complex calculation because the scoring depends on other individuals who also depend on each other. A naive implementation, even for a small number of individuals, would cause a thread-lock problem. Advanced techniques block, for instance, an individual's (person A's) scoring that is dependent on other individuals' (persons 1 - n) scoring. Persons 1 - n are dependent upon person A's scoring. Thus, persons 1 - n would need to be completed either with or without person A's data and may be completed asynchronously using special libraries such as the PCDP library provided by Rice University for Java. To make this calculation, strategies would block, remove, or delay person A from being scored until all other persons 1 - n are completed. Note the problem here is that persons 1 - n are also, in turn, a person A. A solution for this follows the Pascal's-triangle pattern, using a recursive task and implementing Java Futures and memoization. Memoization stores the temporary results needed per calculation and provides locks and releases. Futures organize and provide a mechanism to coordinate the complex calculation efficiently by performing a partial calculation, then stopping and waiting until a thread is called sometime in the future, when the data needed for the calculation is completed. The knowledge to implement this is explained in more detail by courses in multi-core parallel processing, specifically a course available online through Rice University. This method is designed to process large amounts of data quickly, efficiently, and without errors.
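One way to see why the deadlock disappears is to score each round against a read-only snapshot of the previous round, with each person's score computed as a future and shared sub-results memoized. The sketch below transposes the Java Futures/memoization idea to Python's `concurrent.futures`; the fraction, data, and function names are assumptions, and a production system would use the techniques the paragraph cites.

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch of the coordination pattern: every person is scored concurrently
# against the *previous* round's scores, so person A never blocks on an
# in-progress score even though A, B, and C all depend on each other.
def round_scores(prev, edges, fraction=0.05):
    memo = {}  # memoized shared sub-results (neighbor sums)

    def neighbor_sum(p):
        if p not in memo:
            memo[p] = sum(prev[q] for q in edges.get(p, []))
        return memo[p]

    with ThreadPoolExecutor() as pool:
        # each person's score is a future; results are gathered when done
        futures = {p: pool.submit(lambda p=p: prev[p] + fraction * neighbor_sum(p))
                   for p in prev}
        return {p: f.result() for p, f in futures.items()}

prev = {"A": 10.0, "B": 50.0, "C": 30.0}
edges = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
new = round_scores(prev, edges)
# A = 10 + 0.05*(50+30) = 14.0; no deadlock despite the mutual dependence.
```

Iterating `round_scores` until the values stabilize gives the repeated recalculation of step 6 as a fixed-point computation rather than a circular wait.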
[0090] Definitions: The following are to provide context and clarity.
[0091] Knowledge credibility is a measurement of the probability of correctness or infallibility of an individual, an organization of individuals (such as those at a company), or software (such as AI) when they provide a statement, make a decision, or deliver work within a domain-of-knowledge. An individual may be said to have skills, and the individual may have knowledge in order to have the skills, but skills and knowledge are not the same. A professional mountain-bike racer may have the ability to race down a rocky and difficult path at very steep inclines but not possess the ability to explain how force is applied to maintain stability and direction. Likewise, expertise and talent may refer to the ability to do work without understanding the underlying information needed for such work. They refer to the 'about' rather than the 'how'. For example, a software developer may have the ability to write software and use APIs to develop a product but not possess an understanding of how processors work, how data may be manipulated using advanced data structures, or how information is stored in memory. The developer may be considered an expert and to possess skills, but not possess a deeper theoretical understanding.
[0092] A domain is a division, sub-division/sub-domain, or section of a sub-division within an area of knowledge, practice, concentration, field, or other area of knowledge as recognized by academia or industry. It refers to both STEM and non-STEM fields and would be recognized in industry or academia as such. Some examples would be Mathematics to Calculus to its Multivariable division, Law to Divorce Law, Marketing to SEO Optimization, or Physics to Quantum Mechanics, etc.
[0093] Interaction is the activity of creating, reading, observing, listening, discussing, or practicing.
[0094] Practice is the activity of being actively engaged within a domain of interest, such as practicing business or conducting research, where the activity is captured by software or other means related to a category. Examples of practicing are solving problems, writing software, marketing, communicating, banking, and other practices related to STEM (science, technology, engineering, and mathematics) and non-STEM fields, in the day-to-day conduct of one's profession or in solving problems such as may be done in academia.
[0095] Study time represents where a user has actively studied the domain, be it through reading, observing, listening, or studying, for example using a flashcard-like system, and where the activity can be captured and represented by its category.
[0096] Discuss represents discussions and may be time spent in conversation or videochat or similar systems where the discussion may be recorded by a category of capture.
[0097] Create is where the user is actively creating knowledge for consumption by others within the domain-of-knowledge.
[0098] Percentile displays the user’s related percentile within a domain-of-knowledge and related to an entity such as a university or employer.
[0099] Observed is when the software that is connected to this invention creates data for this invention while the user is 1) practicing, 2) creating, 3) discussing, or 4) studying.
[00100] Positional and positional data is ranking, hierarchical, and rating data that compares individuals with each other based on metrics or performance.

Claims
1. A computer-implemented method comprising of: a. a hierarchy-of-trust for original-source information segmented by its reliability to be free of incorrect data; b. providing an understanding of an individual's interaction within a domain-of-knowledge based on the said hierarchy-of-trust; i. wherein said understanding originates from a plurality of sources such as external data-mining searches, and 3rd party software, or hardware which have a plurality of purposes that are categorized by said hierarchy-of-trust;
1. wherein said originating sources may be from data that is collected actively or from existing sources; ii. wherein the said interaction, and understanding is related to a unique identifier such as a user-id, associating the active or previous data to an individual/user or entity such as artificial intelligence, company, or university; and c. a display, or communicating the said interaction, and understanding.
2. The method of claim 1, wherein said interaction comprises of one or more of the following: a. an association of the said individual or said entity with other entities such as an artificial intelligence, business, government, or academic entity; b. an association of the said individual or said entity with other individuals; c. an association of the said individual with positions of responsibility; d. an association of the said individual with projects; e. an association of the said individual with achievements; f. an association of the said individual with studies, works, and testing; g. where the above associations may be included in a timeline, and a display of the timeline; and h. where the above may be associated with a value of a scarce resource.
3. The said other entities of claim 2, wherein positioning or scoring criteria comprises of expertise, a value of a scarce resource, size, number of product offerings, intellectual property, research and development accomplishment, and/or through this invention.
4. The said other individuals of claim 2, wherein positioning or scoring criteria comprises of their expertise, skill, talent within said domain, or through this invention.
5. The method of claim 1, wherein the said plurality of purposes comprises of a variation of software or hardware tools that are used by various industries and would be considered to be used for the practice, discussion, creation of data, or study of or for a domain-of-knowledge.
6. The method of claim 1 wherein the understanding may be used to provide positioning or scoring of a plurality of individuals or entities based on the said interaction within a domain-of-knowledge and the said hierarchy-of-trust; wherein: a. the positioning or scoring is from sub-metrics based on its depth in dependency on parent domains (plural)-of-knowledge; b. the method of positioning or scoring may report its accuracy based on the probability of errors; and c. the sub-metric may include a measurement of the individual's contribution within the field or domain-of-knowledge.
7. The method of claim 1 wherein said understanding may relate to the needed/requested understanding by an industry.
8. The method of claim 1 wherein the understanding may be used to compare an individual with other individuals in relation to an entity; a. wherein the observer-user may select different entities, and the invention will report the observed-user's positioning or scoring in relation to individuals of the selected entity.
9. The method of claim 1, wherein the understanding comprises of a rating of an individual or entity within a knowledge domain based on one or more of: a. the time spent expending effort within said domain; b. the depth of knowledge they have interacted with, within said domain; c. the activity they've conducted, including with other individuals that are experts within said domain; d. the activity they've conducted, including with entities such as companies that are experts within said domain; e. the value from exchange of a scarce resource caused by their expertise, or for their expertise, within said domain; f. the acknowledgement from other individuals who are considered experts within said domain; g. the uniqueness and difficulty of an achievement or accomplishment within said domain; h. the positions of responsibility within said domain; and i. wherein the above is rated by the said hierarchy-of-trust of claim 1.
10. The method of claim 9, wherein the said acknowledgment from others comprises of agreement by an academic score, citation, peer review, or a simple social media vote where the action is recorded, and the source is rated by the said hierarchy-of-trust of claim 1.
11. The method of claim 1 wherein a subdomain, or a dependency on another subdomain or domains, may also indicate depth of understanding within a said domain.
12. The method of claim 1 where it may further provide an apples-to-apples comparison based on knowledge credibility within a domain-of-knowledge and that may further relate to an entity such as a university, country, or state.
13. The method of claim 1 wherein it may display the understanding when called by various methods such as a link provided in a digital document, through an image such as a QR-code, through a search, or an AI inference for an individual, an entity, or a domain-of-knowledge.
14. A method of providing transparency of how the metrics were derived by displaying the underlying data that the metrics were derived from: a. wherein the underlying data is a display provided in a webpage or an application, such as the individuals or entities that the observed individual has worked for or with; and b. wherein the underlying data may include a breakdown of metrics such as value, votes, or citations.
15. A system comprising of one or more processors and non-transitory computer storage media storing instructions that, when executed by the said processors, cause the said processors to perform operations comprising: a. receiving, storing, updating, and synchronizing (when more than one processor) the said data inputs of claims 1 - 14 from measuring devices and software; b. performing the calculations and associations of claims 1 - 14; c. causing the display or communicating the said understanding or said data of claims 1 - 14; and d. wherein the said processors may be in a custom computing device, a personal computing device, or in multiple computing devices, such as web server devices like those that exist in the cloud.
PCT/US2025/012826 2024-01-29 2025-01-24 Method and system to calculate and display knowledge credibility values Pending WO2025165650A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202463626282P 2024-01-29 2024-01-29
US63/626,282 2024-01-29

Publications (1)

Publication Number Publication Date
WO2025165650A1 true WO2025165650A1 (en) 2025-08-07

Family

ID=96591343

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2025/012826 Pending WO2025165650A1 (en) 2024-01-29 2025-01-24 Method and system to calculate and display knowledge credibility values

Country Status (1)

Country Link
WO (1) WO2025165650A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070055656A1 (en) * 2005-08-01 2007-03-08 Semscript Ltd. Knowledge repository
US20110307435A1 (en) * 2010-05-14 2011-12-15 True Knowledge Ltd Extracting structured knowledge from unstructured text
US20140075004A1 (en) * 2012-08-29 2014-03-13 Dennis A. Van Dusen System And Method For Fuzzy Concept Mapping, Voting Ontology Crowd Sourcing, And Technology Prediction
US20190114360A1 (en) * 2017-10-13 2019-04-18 Kpmg Llp System and method for analysis of structured and unstructured data

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 25748835
Country of ref document: EP
Kind code of ref document: A1