MP Vol4 Chap4 Annex11 Consultation Guideline (EN)

Annex 11 outlines updated guidelines for the use of computerized systems in GMP-regulated activities, emphasizing the need for clarity in regulatory requirements and consistency across member states. Key principles include lifecycle management, quality risk management, data integrity, and the responsibilities of regulated users regarding system validation and training. The annex also details requirements for system functionality, supplier management, alarms, and qualification and validation processes to ensure product quality and patient safety.


1 Annex 11: Computerised Systems

2 Reasons for changes: The GMP/GDP Inspectors Working Group and the PIC/S Committee jointly
3 recommended that the current version of Annex 11 on Computerised Systems, be revised to reflect
4 changes in regulatory and manufacturing environments. The revised guideline should clarify
5 requirements and expectations from regulatory authorities, and remove ambiguity and inconsistencies.

6 Document map

1. Scope

2. Principles

3. Pharmaceutical Quality System

4. Risk Management

5. Personnel and Training

6. System Requirements

7. Supplier and Service Management

8. Alarms

9. Qualification and Validation

10. Handling of Data

11. Identity and Access Management

12. Audit Trails

13. Electronic Signatures

14. Periodic Review

15. Security

16. Backup

17. Archiving

Glossary

7 Introduction

8 With an ever-evolving IT landscape, increased use of cloud services, and introduction of new
9 technologies in computerised systems used in GMP activities, there is a growing need for updated
10 guidance on regulatory requirements, and for adopting a common approach between member states of
11 the European Union (EU) and the Pharmaceutical Inspection Co-operation Scheme (PIC/S). The
12 updated Annex 11 outlines the requirements for the use of computerised systems in GMP-regulated
13 activities, thereby ensuring product quality, patient safety and data integrity.

14 1. Scope

15 This annex applies to all types of computerised systems used in the manufacturing of medicinal products
16 and active substances.

17 2. Principles

18 2.1. Lifecycle management. Computerised systems should be validated before use and
19 maintained in a validated state throughout their lifecycle.

20 2.2. Quality Risk Management. Quality Risk Management (QRM) should be applied throughout
21 all lifecycle phases of a computerised system used in GMP activities. The approach should
22 consider the complexity of processes, the level and novelty of automation, and the impact
23 on product quality, patient safety and data integrity.

24 2.3. Alternative practices. Practices which constitute alternatives to the activities required in this
25 document may be used, if they have been proven and documented to provide the same or
26 higher level of control.

27 2.4. Data integrity. It is critically important that data captured, analysed and reported by systems
28 used in GMP activities are trustworthy. As defined by the ALCOA+ principles, data
29 integrity covers many topics including but not limited to requirements defined in the
30 sections Handling of Data, Identity and Access Management, Audit Trails, Electronic
31 Signatures, and Security.

32 2.5. System requirements. System requirements which describe the functionality the regulated
33 user has automated and is relying on when performing GMP activities, should be
34 documented and kept updated to fully reflect the implemented system and its intended use.
35 The requirements should serve as the very basis for system qualification and validation.

36 2.6. Outsourced activities. When using outsourced activities, the regulated user remains fully
37 responsible for adherence to the requirements included in this document, for maintaining
38 the evidence for it, and for providing it for regulatory review.

39 2.7. Security. Regulated users should keep updated about new security threats to GMP systems,
40 and measures to protect these should be implemented and improved in a timely manner,
41 where needed.

42 2.8. No risk increase. Where a computerised system replaces another system or a manual
43 operation, there should be no resultant decrease in product quality, patient safety or data
44 integrity. There should be no increase in the overall risk of the process.
45 3. Pharmaceutical Quality System

46 3.1. Pharmaceutical quality system. A regulated user should implement a pharmaceutical quality
47 system (PQS), which covers all computerised systems used in GMP activities and personnel
48 involved with these. It should include all activities required in this document and in addition,
49 it should be ensured that:

50 i. All deviations occurring during validation or operation of computerised systems
51 are recorded and any significant deviations investigated with the objective of
52 determining the root cause and any impact on product quality, patient safety or data
53 integrity. Suitable corrective and preventive actions (CAPA) should be identified
54 and implemented, and the effectiveness of these should be verified.

55 ii. Any change to a computerised system including but not limited to its configuration,
56 its hardware and software components, and its platform and operating system, is
57 made in a controlled manner and in accordance with defined procedures. Any
58 significant change which may impact product quality, patient safety or data
59 integrity, should be subject to re-qualification and validation.

60 iii. Internal audits are planned, conducted, reported and followed up on to detect
61 procedural deviations and ensure product quality, patient safety and data integrity.

62 iv. Regular management reviews cover relevant performance indicators for the
63 computerised system and the process it is used in (quality metrics) and ensure that
64 adequate action is taken.

65 v. Senior management effectively oversee the state of control throughout the system
66 lifecycle, allocate appropriate resources, and implement a culture that promotes
67 data integrity, security and a timely and effective handling of deviations.

68 4. Risk Management

69 4.1. Lifecycle. Quality Risk Management (QRM) should be applied throughout the lifecycle of
70 a computerised system considering any possible impact on product quality, patient safety or
71 data integrity.

72 4.2. Identification and analysis. Risks associated with the use of computerised systems in GMP
73 activities should be identified and analysed according to an established procedure. Examples
74 of risk management methods and tools can be found in ICH Q9 (R1).

75 4.3. Appropriate validation. The validation strategy and effort should be determined based on
76 the intended use of the system and potential risks to product quality, patient safety and data
77 integrity.

78 4.4. Mitigation. Where applicable, risks associated with the use of computerised systems in
79 GMP activities should be mitigated and brought down to an acceptable level, if possible, by
80 modifying processes or system design. The outcome of the risk management process should
81 result in the choice of an appropriate computerised system architecture and functionality.

82 4.5. Data integrity. Quality risk management principles should be used to assess the criticality
83 of data to product quality, patient safety and data integrity, the vulnerability of data to
84 deliberate or unintentional alteration, deletion or loss, and the likelihood of detection of such
85 actions.

86 5. Personnel and Training

87 5.1. Cooperation. When conducting the activities required in this document, there should be,
88 where applicable, close cooperation between all relevant parties. This includes process
89 owner, system owner, users, subject matter experts (SME), QA, QP, the internal IT
90 department, vendors, and service providers.

91 5.2. Training. All parties involved with computerised systems used in GMP activities should
92 have adequate system specific training, and appropriate qualifications and experience,
93 corresponding to their assigned responsibilities, duties and access privileges.

94 6. System Requirements

95 6.1. GMP functionality. A regulated user should establish and approve a set of system
96 requirements (e.g. a User Requirements Specification, URS), which accurately describe the
97 functionality the regulated user has automated and is relying on when performing GMP
98 activities. This principle should be applied regardless of whether a system is developed in-
99 house, is a commercial off-the-shelf product, or is provided as-a-service, and regardless
100 of whether it is developed following a linear or iterative software development process.

101 6.2. Extent and detail. The extent and detail of defined requirements should be commensurate
102 with the risk, complexity and novelty of a system, and the description should be sufficient
103 to support subsequent risk analysis, specification, design, purchase, configuration,
104 qualification and validation. It should include, but may not be limited to, operational,
105 functional, data integrity, technical, interface, performance, availability, security, and
106 regulatory requirements. Where relevant, requirements should include process maps and
107 data flow diagrams, and use cases may be applied.

108 6.3. Ownership. If a system is purchased or consists of software-as-a-service, a requirements
109 specification may be provided by the vendor. However, the regulated user should carefully
110 review and approve the document and consider whether the system fulfils GMP
111 requirements and company processes as is, or whether it should be configured or
112 customised. The regulated user should take ownership of the document covering the
113 implemented version of the system and formally approve and control it after making any
114 necessary changes.

115 6.4. Update. Requirements should be updated and maintained throughout the lifecycle of a
116 system to ensure that they continue to give a complete and accurate description of system
117 functionality as the system undergoes subsequent changes and customisations. Updated
118 requirements should form the very basis for qualification and validation of a system.

119 6.5. Traceability. Documented traceability between individual requirements, underlying design
120 specifications and corresponding qualification and validation test cases should be
121 established and maintained. The use of effective tools to capture and hold requirements and
122 facilitate the traceability is encouraged.
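
For illustration only (this sketch is not part of the annex), a requirements traceability matrix can be held as a simple mapping from requirement identifiers to the design specifications and test cases that cover them; all identifiers below (URS-001, DS-004, TC-010, etc.) are hypothetical examples.

```python
# Illustrative sketch only: a minimal requirements traceability matrix held as a mapping.
# All identifiers (URS-001, DS-004, TC-010, ...) are hypothetical examples.

traceability_matrix = {
    "URS-001": {"design_specs": ["DS-004"], "test_cases": ["TC-010", "TC-011"]},
    "URS-002": {"design_specs": ["DS-005"], "test_cases": []},  # gap: no test coverage yet
}

def untested_requirements(matrix):
    """Return requirement IDs with no linked qualification/validation test case."""
    return [req_id for req_id, links in matrix.items() if not links["test_cases"]]

if __name__ == "__main__":
    print("Requirements without test coverage:", untested_requirements(traceability_matrix))
```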

123 6.6. Configuration. It should be clear what functionality, if any, is modified or added by
124 configuration of a system. Options allowing configuration of system functionality should
125 be described in the requirements specification and the chosen configuration should be
126 documented in a controlled configuration specification.

127 7. Supplier and Service Management

128 7.1. Responsibility. When a regulated user is relying on a vendor’s qualification of a system used
129 in GMP activities, a service provider, or an internal IT department’s qualification and/or
130 operation of such system, this does not change the requirements put forth in this document.
131 The regulated user remains fully responsible for these activities based on the risk they
132 pose to product quality, patient safety and data integrity.

133 7.2. Audit. When a regulated user is relying on a vendor’s or a service provider’s qualification
134 and/or operation of a system used in GMP activities, the regulated user should, according
135 to risk and system criticality, conduct an audit or a thorough assessment to determine the
136 adequacy of the vendor or service provider’s implemented procedures, the documentation
137 associated with the deliverables, and the potential to leverage these rather than repeating the
138 activities.

139 7.3. Oversight. When a regulated user is relying on a service provider’s or an internal IT
140 department’s operation of a system used in GMP activities, the regulated user should
141 exercise effective oversight of this according to defined service level agreements (SLA) and
142 key performance indicators (KPI) agreed with the service provider or the internal IT
143 department.

144 7.4. Documentation availability. When a regulated user relies on a vendor’s, a service provider’s
145 or an internal IT department’s qualification and/or operation of a system used in GMP
146 activities, the regulated user should ensure that documentation for activities required in this
147 document is accessible and can be explained from their facility. In this, the regulated user
148 may be supported by the vendor, the service provider or the internal IT department.

149 7.5. Contracts. When a regulated user is relying on a service provider’s or an internal IT
150 department’s qualification and/or operation of a system used in GMP activities, the
151 regulated user should have a contract with the service provider or have approved procedures
152 with an internal IT department which:

153 i. Describes the activities and documentation to be provided

154 ii. Establishes the company procedures and regulatory requirements to be met

155 iii. Agrees on regular, ad hoc and incident reporting and oversight (incl. SLAs and
156 KPIs), answer times, resolution times, etc.

157 iv. Agrees on conditions for supplier audits

158 v. Agrees on support during regulatory inspections, if so requested

159 vi. Agrees on resolution of issues brought up during normal operation, audits and
160 regulatory inspections etc.

161 vii. Defines requirements and processes for communication of quality and security
162 related issues

163 viii. Defines an exit strategy by which the regulated user may retain control of system
164 data

165 ix. Agrees on the process for release of new system versions and on the regulated
166 user’s possibility to test these prior to release.

167 8. Alarms

168 8.1. Reliance on system. Alarms should be implemented in computerised systems where a
169 regulated user is relying on the system to notify about an event. This is required when the
170 user must take a specific action, without which product quality, patient safety or data
171 integrity might otherwise be compromised.

172 8.2. Settings. Alarm limits, delays, and any early warnings or alerts, should be appropriately
173 justified, and set within approved and validated process and product specifications. Setting,
174 changing or deactivation should only be available to users with appropriate access privileges
175 and should be managed by an approved procedure.

176 8.3. Signalling. Alarms should set off visible and/or audible signals when set alarm limits are
177 exceeded and after any defined delay. The signalling should accommodate a timely reaction
178 and should be appropriate to the work environment.

179 8.4. Acknowledgement. Critical alarms potentially impacting product quality, patient safety or
180 data integrity should only be acknowledged by users with appropriate access privileges. As
181 part of the acknowledgement, i.e. a confirmation that the alarm has been seen and
182 appropriate action will be taken, a comment should be added about why the alarm was
183 acknowledged (see 12 Audit Trails).

184 8.5. Log. All alarms and acknowledgements should be automatically added to an alarm log. This
185 should contain the name of the alarm, date and time of the alarm, date and time of the
186 acknowledgement, username and role of the user acknowledging the alarm and any
187 comment about why the alarm was acknowledged. It should not be possible for users
188 working according to GMP to deactivate or edit alarm logs.
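
For illustration only (not a requirement of this annex), an alarm log of the kind described in 8.5 can be modelled as an append-only list of entries; the field and event names below are assumptions chosen for the example.

```python
# Illustrative sketch only: an append-only alarm log capturing the fields listed in 8.5.
# Field and event names are assumptions for illustration, not prescribed by this annex.
from datetime import datetime, timezone

alarm_log = []  # append-only by convention; no edit or delete functions are exposed

def record_alarm(alarm_name):
    """Automatically add an alarm occurrence to the log."""
    alarm_log.append({"event": "ALARM", "alarm": alarm_name,
                      "timestamp": datetime.now(timezone.utc)})

def acknowledge_alarm(alarm_name, username, role, comment):
    """Add an acknowledgement, including who acknowledged the alarm and why."""
    alarm_log.append({"event": "ACKNOWLEDGEMENT", "alarm": alarm_name,
                      "timestamp": datetime.now(timezone.utc),
                      "username": username, "role": role, "comment": comment})

record_alarm("Temperature high - incubator 3")
acknowledge_alarm("Temperature high - incubator 3", "jdoe", "Operator",
                  "Door opened for scheduled sampling; temperature recovered within limits.")
```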

189 8.6. Searchability and sortability. Alarm logs should be searchable and sortable in the
190 originating system, or it should be possible to export logs to a tool which provides this
191 functionality. Other methods of reviewing alarms may also be used, if they provide the same
192 effectiveness.

193 8.7. Review. Alarm logs should be subject to appropriate periodic reviews based on approved
194 procedures, in which it should be evaluated whether they have been timely acknowledged
195 by authorised users and whether appropriate action has been taken. Reviews should be
196 documented, and results should be evaluated to identify any trends that could indicate
197 negative performance of a system or process, or impact on the product. The frequency and
198 detail of reviews should be based on the risk to product quality, patient safety and data
199 integrity.

200 9. Qualification and Validation

201 9.1. Principles. Qualification and validation activities for computerised systems should follow
202 the general principles outlined in GMP Annex 15. The activities should address both
203 standard and configured system functionality, as well as any functionality realised through
204 customisation.

205 9.2. Quality risk management. Computerised systems should be qualified and validated in
206 accordance with the principles of quality risk management. Decisions on the scope and
207 extent of qualification and validation of specific functionality and entire systems should be
208 based on a justified and documented risk assessment of individual requirements and, where
209 relevant, functional specifications, considering the risk for product quality, patient safety
210 and data integrity.

211 9.3. Installation and configuration. Prior to commencing any test activity, it should be verified
212 that a computerised system and its components have been correctly installed and configured
213 according to specifications, and where applicable, that relevant components have been
214 properly calibrated. Operating systems and platforms should be updated to supported
215 versions and relevant security patches should be deployed (see 15.10 Updated platforms and
216 15.13 Timely patching).

217 9.4. Evidence. System qualification and validation should provide evidence in the form of
218 executed test scripts, and where relevant, screen dumps, that requirements, and where
219 applicable, derived functional specifications, are met by the system.

220 9.5. Traceability. Test cases should be traceable to individual requirements or specifications,
221 e.g. by means of a requirements traceability matrix. Test cases which are not traceable to
222 requirements or applicable specifications do not meet the requirements for qualification and
223 validation.

224 9.6. Focus. Increased focus should be placed on testing a system’s handling of key functional
225 requirements, on functionality intended to ensure that activities are conducted according to
226 GMP, and on functionality designed to ensure data integrity. This includes but is not limited
227 to access privileges, release of products and results, calculations, audit trails, error handling,
228 handling of alarms and warnings, boundary and negative testing, reports and interfaces, and
229 restore from backup.

230 9.7. Plan and approval. Qualification and validation activities should be conducted according
231 to approved plans, protocols and test scripts. Test scripts should be described in sufficient
232 detail to ensure a correct and repeatable conduct of test steps and prerequisites.

233 9.8. Completion prior to use. Qualification and validation activities should be successfully
234 completed and reported prior to approval and taking a system into use. Conditional approval
235 to proceed to taking a system into use may be granted where certain acceptance criteria have
236 not been met, or deviations have not been fully addressed. A condition for this is that there
237 is a documented assessment that any deficiencies in the affected system functionality or
238 GMP processes will not impact product quality, patient safety or data integrity. Where a
239 conditional approval is issued, it should be explicitly stated in the validation report and there
240 should be close follow-up on approval of outstanding actions according to plan.

241 9.9. Authorisation. Qualification and validation documentation may be provided by a service
242 provider, a vendor or an internal IT department in parts or in whole. However, the regulated
243 user is fully accountable and should carefully review and authorise the use of the
244 documentation. They should carefully consider whether it covers the implemented version
245 and supports GMP and company processes as is, or whether it should be repeated in parts
246 or completely by the regulated user.

247 10. Handling of Data

248 10.1. Input verification. Where critical data is entered manually, systems should, where applicable,
249 have functionality to verify the plausibility of the inputs (e.g. within expected ranges), and
250 alert the user when the input is not plausible.
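
For illustration only (not part of the annex), a plausibility check on manually entered critical data can compare the input against expected ranges and alert the user otherwise; the parameter name and limits below are hypothetical.

```python
# Illustrative sketch only: a plausibility check for manually entered critical data.
# The parameter name and its limits (a tablet weight of 95-105 mg) are hypothetical.
EXPECTED_RANGES = {"tablet_weight_mg": (95.0, 105.0)}

def check_plausibility(parameter, value):
    """Return True if the value lies within the expected range; otherwise alert the user."""
    low, high = EXPECTED_RANGES[parameter]
    if low <= value <= high:
        return True
    print(f"WARNING: {parameter}={value} is outside the expected range {low}-{high}; "
          "please confirm or correct the entry.")
    return False

check_plausibility("tablet_weight_mg", 150.0)  # triggers the warning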

251 10.2. Data transfer. Where a routine work process requires that critical data be transferred from
252 one system to another (e.g. from a laboratory instrument to a LIMS system), this should,
253 where possible, be based on validated interfaces rather than on manual transcriptions. If
254 critical data is transcribed manually, effective measures should be in place to ensure that
255 this does not introduce any risk to data integrity.

256 10.3. Data migration. Where an ad hoc process requires that critical data or a whole database be
257 migrated from one system to another (e.g. when moving data from a retired to a new
258 system), this should be based on a validated process. Among other things, it should consider
259 the constraints on the sending and receiving side.

260 10.4. Encryption. Where applicable, critical data should be encrypted on a system.

261 11. Identity and Access Management

262 11.1. Unique accounts. All users should have unique and personal accounts. The use of shared
263 accounts, except for those limited to read-only access (no data or settings can be changed),
264 constitutes a violation of data integrity.

265 11.2. Continuous management. User accesses and roles should be granted, modified and revoked
266 as relevant and in a timely manner as users join, change, and end their involvement in GMP
267 activities.

268 11.3. Certain identification. The method of authentication should identify users with a high
269 degree of certainty and provide an effective protection against unauthorised access.
270 Typically, it may involve a unique username and a password, although other methods
271 providing at least the same level of security may be employed (e.g. biometrics).
272 Authentication only by means of a token or a smart card is not sufficient, if this could be
273 used by another user.

274 11.4. Confidential passwords. Passwords and other means of authentication should be kept
275 confidential and protected from all other users, both at system and at a personal level.
276 Passwords received from e.g. a manager or a system administrator should be changed at
277 the first login, preferably required by the system.

278 11.5. Secure passwords. Passwords should be secure and enforced by systems. Password rules
279 should be commensurate with risks and consequences of unauthorised changes in systems
280 and data. For critical systems, passwords should be of sufficient length to effectively prevent
281 unauthorised access and contain a combination of uppercase, lowercase, numbers and
282 symbols. A password should not contain e.g. words that can be found in a dictionary, the
283 name of a person, a user id, product or organisation, and should be significantly different
284 from a previous password.
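
For illustration only (not part of the annex), system-enforced password rules of the kind described above can be expressed as a small check; the minimum length of 12 characters and the forbidden word list are assumptions chosen for the example.

```python
# Illustrative sketch only: system-enforced password rules of the kind described in 11.5.
# The concrete limits (minimum length 12) and the forbidden word list are assumptions.
import re

FORBIDDEN_WORDS = {"password", "acmepharma", "productx"}  # e.g. dictionary words, organisation or product names

def is_acceptable_password(password, username, previous_password):
    long_enough = len(password) >= 12
    has_all_classes = all(re.search(pattern, password)
                          for pattern in (r"[a-z]", r"[A-Z]", r"\d", r"[^A-Za-z0-9]"))
    contains_forbidden = any(word in password.lower()
                             for word in FORBIDDEN_WORDS | {username.lower()})
    too_similar = previous_password.lower() in password.lower()
    return long_enough and has_all_classes and not contains_forbidden and not too_similar

print(is_acceptable_password("pAssword2024!", "jdoe", "Quercus#1881"))  # False: contains a dictionary word
```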

285 11.6. Strong authentication. Remote authentication on critical systems from outside controlled
286 perimeters, should include multifactor authentication (MFA).

287 11.7. Auto locking. Accounts should be automatically locked after a pre-defined number of
288 successive failed authentication attempts. Accounts should only be unlocked by the system
289 administrator after it has been confirmed that this was not part of an unauthorised login
290 attempt or after the risk for such attempt has been removed.

291 11.8. Inactivity logout. Systems should include an automatic inactivity logout, which logs out a
292 user after a defined period of inactivity. The user should not be able to change the inactivity
293 logout time (outside defined and acceptable limits) or deactivate the functionality. Upon
294 inactivity logout, a re-authentication should be required (e.g. password entry).

295 11.9. Access log. Systems should include an access log (separate, or as part of the audit trail)
296 which, for each login, automatically logs the username, user role (if it is possible to choose
297 between several roles), the date and time for login, the date and time for logout (incl.
298 inactivity logout). The log should be sortable and searchable, or alternatively, it should be
299 possible to export the log to a tool which provides this functionality.

300 11.10. Guiding principles. Access privileges for users of computerised systems used in GMP
301 activities should be managed according to the following two guiding principles:

302 • Segregation of duties, i.e. that users who are involved in GMP activities do not have
303 administrative privileges.

304 • Least privilege principle, i.e. that users do not have higher access privileges than
305 what is necessary for their job function.

306 11.11. Recurrent reviews. User accounts should be subject to recurrent reviews where managers
307 confirm the continued access of their employees in order to detect accesses which should
308 have been changed or revoked during daily operation, but were accidentally forgotten. If
309 user accounts are managed by means of roles, these should be subject to the same kind of
310 reviews, where the accesses of roles are confirmed. The reviews should be documented, and
311 appropriate action taken. The frequency of these reviews should be commensurate with the
312 risks and consequences of changes in systems and data made by unauthorised individuals.

313 12. Audit Trails

314 12.1. Manual user interactions. Systems which are used to control processes, capture, hold or
315 report data, and where users can create, modify or delete data, settings or access privileges,
316 acknowledge alarms or execute electronic signatures etc., should have an audit trail
317 functionality which automatically logs all manual user interactions.

318 12.2. Who, what, when, why. The audit trail should unambiguously capture the user who made a
319 change (including the user’s role, if users may have more than one role), what was changed
320 (including the data that was changed and the old and the new value), and the date and time
321 when the change was made (including the time zone if applicable). Audit trail data should
322 be recorded at the time of events, not at the end of a process. Where data is changed from
323 an old value to a new value, systems should automatically prompt the user for, and register,
324 the reason why the change was made.
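
For illustration only (not part of the annex), an audit trail entry capturing who, what, when and why at the time of the event can be modelled as an immutable record; the field names and example values are assumptions.

```python
# Illustrative sketch only: an audit trail entry recording who, what, when and why at the time
# of the event. Field names are assumptions for illustration, not prescribed by this annex.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)      # frozen: an entry cannot be edited once it has been written
class AuditTrailEntry:
    username: str
    role: str                # relevant where a user can hold more than one role
    item: str                # what was changed, e.g. a record field or setting
    old_value: str
    new_value: str
    timestamp: datetime      # captured at the time of the event, in UTC here
    reason: str              # the reason the system prompts the user for

audit_trail = []             # append-only; no edit or delete functions are exposed

def record_change(username, role, item, old_value, new_value, reason):
    if not reason.strip():
        raise ValueError("A reason for the change must be provided.")
    audit_trail.append(AuditTrailEntry(username, role, item, old_value, new_value,
                                       datetime.now(timezone.utc), reason))

record_change("jdoe", "QA Reviewer", "expiry_date", "2026-05", "2027-05",
              "Corrected transcription error against the approved stability report.")
```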

325 12.3. No edit or deactivation. Audit trail functionality should be enabled and locked at all times,
326 and it should not be possible for any user to edit audit trail data. If audit trail settings or
327 system time can be changed, or if the functionality can be deactivated, this should by itself
328 create an entry in the audit trail, and such changes should only be possible for a system
329 administrator not involved in any GMP activities (see 11.10 Guiding principles).

330 12.4. Accommodate review. Systems should accommodate effective and efficient reviews of audit
331 trail data. It should be possible for all users to sort and search audit trail data (who, what,
332 when and why) in the system, or alternatively, to allow export of the data to a tool where
333 this is possible.

334 12.5. Reviews. Audit trail reviews should be conducted according to a documented procedure for
335 the specific system, or type of systems. The procedure should outline who should make the
336 review, what should be reviewed, and when the review should be made. The use of tools to
337 help conduct audit trail reviews is encouraged and appropriate action should be taken and
338 documented following the reviews. Any significant variation from the expected outcome
339 found during the audit trail review should be fully investigated and recorded.

340 12.6. Independent review. Audit trail reviews should be conducted by personnel not directly
341 involved in the activities covered by the review (a peer review).

342 12.7. Scope of review. Reviewing all entries in an audit trail record may not be effective. Reviews
343 should be targeted, based on risk and adapted to local manufacturing processes. Procedures
344 for audit trail reviews should focus on detecting any deliberate or unintentional changes to
345 critical processes or data that indicate a violation of GMP principles, including, but not
346 limited to, repetition of activities, errors, omissions, unauthorised process deviations and
347 loss of data integrity. A key element should be to verify the reason why a change is made.

348 12.8. Timeliness of review. Audit trail reviews should be conducted in a timely manner according
349 to the risk of the process reviewed. The audit trail review should be conducted prior to batch
350 release, unless the risk of a later detection of any unwarranted changes can be justified.

351 12.9. Electronic copy. It should be possible to obtain a complete electronic copy of system data
352 including audit trail data. Flat and locked files are not acceptable; it should be possible to
353 search and sort data.

354 12.10. Availability to QP. Audit trail reviews with direct impact on the release of a product should
355 be available to the QP at the time of batch release.
356 13. Electronic Signatures

357 13.1. Scope. Requirements for electronic signatures in this document apply to systems and tools
358 used in processes where GMP requires a signature.

359 13.2. Open systems. Where the system owner does not have full control of system accesses (open
360 systems), or where required by other legislation, electronic signatures should, in addition,
361 meet applicable national and international requirements, such as trusted services.

362 13.3. Re-authentication. When executing an electronic signature, a system should require users
363 to perform a full re-authentication providing at least the same level of security as during
364 system login (see 11.3 Certain identification). When executing subsequent electronic
365 signatures in immediate sequence, authentication may be by means of a password or
366 biometrics only. Authentication only by means of a smart card, a pin code, or relying on the
367 previous system authentication is not acceptable.

368 13.4. Date and time. Systems should automatically log the date and time and, where applicable,
369 the time zone when an electronic signature was applied.

370 13.5. Meaning. It should be clear when a user is executing an electronic signature and where
371 applicable, systems should prompt the user for the meaning of the signature (e.g. reviewer
372 or approver).

373 13.6. Manifestation. When an electronic signature is displayed (on screen or print), the
374 manifestation should include the full name of the user, the username, where applicable the
375 role of the signer and the meaning of the signature, the date and time, and where needed the
376 time zone, when the signature was applied.

377 13.7. Indisputability. Electronic signatures should be indisputable and equivalent to hand-written
378 signatures.

379 13.8. Unbreakable link. Electronic signatures should be permanently linked to their respective
380 records. Controls should be in place to ensure that a signed record cannot be modified or
381 alternatively, that if a later change is made to a signed record, it will clearly appear as
382 unsigned.

383 13.9. Hybrid solution. If a wet-ink signature (on paper) is used to sign electronic records held in
384 a computerised system (a hybrid solution), measures should be implemented to provide a
385 high degree of certainty that any change to the electronic record will invalidate the signature.
386 This may be implemented by calculating a hash code (check sum) of the electronic record
387 and printing that on the signature page.
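
For illustration only (not part of the annex), the hash code mentioned above can be computed over a canonical serialisation of the electronic record; the record content and the choice of SHA-256 are assumptions for the example.

```python
# Illustrative sketch only: computing a hash code (checksum) of an electronic record so that it
# can be printed on the wet-ink signature page of a hybrid solution. The record content and the
# choice of SHA-256 are assumptions for illustration.
import hashlib
import json

record = {"batch": "B-0001", "result": "conforms", "reviewed_by": "jdoe"}  # hypothetical record

# Serialise the record in a canonical form so that identical content always yields the same hash.
canonical = json.dumps(record, sort_keys=True, separators=(",", ":")).encode("utf-8")
checksum = hashlib.sha256(canonical).hexdigest()

print(f"SHA-256 to print on the signature page: {checksum}")
# Any later change to the electronic record changes the checksum, so the signed printout no
# longer matches the record and the wet-ink signature is effectively invalidated.
```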

388 14. Periodic Reviews

389 14.1. Periodic reviews. After a system has been initially validated and is put into operation,
390 periodic reviews should be conducted. These reviews should verify whether the system
391 remains 'fit for intended use' and in 'a validated state', or whether changes should be made
392 and re-validation (complete or in parts) is required. The reviews should be documented and
393 findings analysed to identify any consequences on product quality, patient safety and data
394 integrity, and to prevent recurrence.
395 14.2. Scope of review. Where applicable, periodic reviews should include, but may not be limited
396 to:

397 Changes made since the previous review:

398 i. To the system’s hardware and software components, configuration, platform,
399 infrastructure and interfaces.

400 ii. To the system documentation, e.g. requirements specifications, user guides and
401 SOPs. This includes a verification that system changes are fully reflected in the
402 system documentation.

403 iii. The combined effect of multiple changes in this, and in other systems, should be
404 assessed. Undocumented (unapproved) changes should be effectively identified,
405 e.g. by means of configuration auditing.

406 Follow-up on supporting processes:

407 iv. Actions from previous periodic reviews, audits and inspections, and corrective and
408 preventive actions.

409 v. Conduct of, and actions from, audit trail reviews, access reviews, and risk
410 assessments.

411 vi. Actions from incidents, problems and deviations, security incidents and new
412 security threats.

413 vii. Maintenance, calibration, support contracts and service level agreements (SLA).

414 viii. Contracts and key performance indicators (KPI) with vendors and service
415 providers.

416 ix. Adequacy of backup procedures, restore tests and disaster recovery plans.

417 x. Adequacy and timeliness of archival.

418 xi. Conduct and actions from data integrity assessments.

419 xii. Changes to regulatory requirements.

420 14.3. Frequency. Periodic reviews should be conducted, approved and closed according to plan.
421 The frequency of reviews should be established and justified based on the risk the system
422 poses to product quality, patient safety and data integrity. A final review should be
423 conducted when the system is taken out of use.

424 15. Security

425 15.1. Security system. Regulated users should ensure an effective information security
426 management system is implemented and maintained, which safeguards authorised access
427 to, and detects and prevents unauthorised access to, GMP systems and data.

428 15.2. Continuous improvement. Regulated users should keep updated about new security threats,
429 and measures to protect GMP systems and data should be continuously improved as
430 applicable to counter this development.

431 15.3. Training and tests. Regulated users should undergo recurrent security awareness training,
432 as relevant, to raise and maintain their understanding of cyber threats and safe behaviour.
433 The effectiveness of the training should be evaluated, e.g. by means of simulated tests.

434 15.4. Physical access. Servers, computers, devices, infrastructure and storage media used in GMP
435 activities should be physically protected against unauthorised access, damage and loss.
436 Physical access to server rooms and data centres should be limited to the necessary
437 minimum and these should be securely locked, e.g. by means of multi-factor authentication.
438 If unauthorised access is possible (e.g. 'co-location'), access to individual servers should be
439 protected.

440 15.5. Disasters and disturbances. Data centres should be constructed to minimise the risk and
441 impact of natural and manmade disasters and disturbances. This includes, but may not be
442 limited to, storms, flooding, water leaks, earthquakes, fires, power outages, and network
443 failures.

444 15.6. Replication. Where relevant, critical data should be replicated from a primary to a secondary
445 data centre. The replication should take place automatically with a delay which is short
446 enough to minimise the risk of loss of data. The secondary (failover) data centre should be
447 located at a safe distance from the primary site to minimise the risk that the same incident
448 destroys both data centres.

449 15.7. Disaster recovery. A disaster recovery plan should be in place, tested and available during
450 and after a disaster has affected a data centre, server, computer, infrastructure, or data.
451 Where applicable, the plan should ensure the continuity of operation within a defined
452 Recovery Time Objective (RTO).

453 15.8. Segmentation and firewalls. Networks should be segmented, and effective firewalls
454 implemented to provide barriers between networks, and control incoming and outgoing
455 network traffic. Firewall rules (e.g. based on IP addresses, destinations, protocols,
456 applications, or ports) should be defined as strictly as practically feasible, only allowing
457 necessary and permissible traffic.

458 15.9. Review of firewalls. Firewall rules should be periodically reviewed as the rules tend to be
459 changed or become insufficient over time (e.g. as ports are opened but never closed, or as
460 new cyber threats evolve). This review should ensure that firewalls continue to be set as
461 tight as possible.

462 15.10. Updated platforms. Operating systems and platforms for applications should be updated in
463 a timely manner according to vendor recommendations, to prevent their use in an
464 unsupported state.

465 15.11. Validation and migration. Validation of applications on updated operating systems and
466 platforms and migration of data should be planned and completed in due time prior to the
467 expiry of the vendor’s support.

468 15.12. Unsupported platforms. Applications on operating systems and platforms, which are no
469 longer supported by vendors, and for which threats are no longer monitored and applicable
470 security patches released, are highly vulnerable and should be isolated from computer
471 networks and the internet.

472 15.13. Timely patching. While operating systems and platforms are under support, vendors
473 typically release security patches to counter identified vulnerabilities, some of which
474 (critical vulnerabilities) could otherwise be exploited to give unauthorised individuals
475 privileged access to systems and allow code execution (e.g. ransomware attacks). Hence,
476 relevant security patches released by vendors of operating systems and platforms should be
477 deployed in a timely manner according to vendor recommendations. For critical
478 vulnerabilities, this might be immediately.

479 15.14. Unpatched platforms. Applications on operating systems and platforms, which are not
480 security patched in a timely manner (critical patches) according to vendor recommendations,
481 are highly vulnerable and constitute a major risk for loss of data integrity. Where relevant,
482 such systems should be isolated from computer networks and the internet.

483 15.15. Strict control. The use of bidirectional devices (e.g. USB) in servers and computers used in
484 GMP activities should be strictly controlled within the organisation.

485 15.16. Effective scan. If bidirectional devices (e.g. USB) may have been used outside the
486 organisation (e.g. privately), they may intentionally or unintentionally introduce malware
487 and cause code execution. Hence, they should not be used unless they have been effectively
488 scanned and found to be harmless and not to compromise system and data integrity.

489 15.17. Deactivated ports. Ports for bidirectional devices (e.g. USB) in critical servers and
490 computers should be deactivated by default, blocked or even removed, unless they are used
491 for devices necessary to operate the system (e.g. keyboard or mouse).

492 15.18. Anti-virus software. Anti-virus software should be installed and activated on systems used
493 in GMP activities, especially those interfacing with the internet. The anti-virus software should
494 be continuously updated with the most recent virus definitions to identify, quarantine, and
495 remove known computer viruses. The effectiveness of the process should be monitored.

496 15.19. Penetration testing. For critical systems facing the internet, penetration testing (ethical
497 hacking) should be performed at regular intervals to evaluate the adequacy of security
498 measures taken, and to identify vulnerabilities in system security. This should include the
499 potential for unauthorised parties to gain access to and control the system and its data. The
500 effectiveness of the process should be verified and monitored. Vulnerabilities identified,
501 especially those related to a potential loss of data integrity, should be addressed and
502 mitigated in a timely manner.

503 15.20. Encryption. When remotely connecting to systems over the internet, a secure and encrypted
504 protocol should be used.

505 16. Backup

506 16.1. Regular backup. Data and metadata should be regularly backed up following established
507 procedures to prevent the loss of data in case of accidental or deliberate change or deletion,
508 or loss as the result of a malfunction or corruption, e.g. as the result of a cyber-attack.

509 16.2. Frequency and retention. The frequency, retention period and storage of backups is
510 critically important to the effectiveness of the process to mitigate the loss of data. Backups
511 should be made at suitable intervals (e.g. hourly, daily, weekly and monthly) and their
512 retention determined through a risk-based approach (e.g. correspondingly a week, a month,
513 a quarter, and years).
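
For illustration only (not part of the annex), the risk-based frequency and retention example given above can be written out as a simple schedule; the intervals and retention periods are hypothetical, not prescribed values.

```python
# Illustrative sketch only: a risk-based backup schedule mirroring the example in 16.2.
# The intervals and retention periods are hypothetical, not prescribed values.
BACKUP_SCHEDULE = {
    # backup type: (interval, retention period)
    "hourly":  ("every hour",  "1 week"),
    "daily":   ("every day",   "1 month"),
    "weekly":  ("every week",  "1 quarter"),
    "monthly": ("every month", "several years"),
}

for backup_type, (interval, retention) in BACKUP_SCHEDULE.items():
    print(f"{backup_type:>8}: runs {interval}, copies retained for {retention}")
```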

514 16.3. Physical separation. Backups should be physically separated from the server or computer
515 holding the original data and stored at a safe distance from this, to prevent both from being
516 impacted by the same incident.

517 16.4. Logical separation. Backups should not be stored on the same logical network as the original
518 data to avoid simultaneous destruction or alteration.

519 16.5. Scope. Depending on the criticality and urgency for recovery after an incident, applications
520 and system configurations may also need to be backed up.

521 16.6. Restore test. Restore of data from backup should be tested and documented based on risk
522 during system validation and after changes are made to the backup or restore processes and
523 tools. Restore tests should be documented and include a verification that data is accessible
524 on the system.

525 17. Archiving

526 17.1. Read only. After completion of a process, e.g. release of a product, GMP data and metadata
527 (incl. audit trails) should be protected from deletion and changes throughout the retention
528 period. This may be by changing its status to read-only in the system where the data was
529 generated or captured, or by moving it to a dedicated archival system via a validated
530 interface.

531 17.2. Verification. When moving GMP data and metadata from one location to another in a
532 system, or to a dedicated archival system, the integrity of the data should be verified with a
533 high degree of certainty before any data is deleted, e.g. by means of a checksum. Where this
534 is not possible, the completeness and integrity of the data should be verified manually.
535 However, this verification does not alter the need for a validation of the archival and
536 retrieval process, and of the systems and interfaces involved.
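
For illustration only (not part of the annex), a checksum comparison of the kind mentioned above can confirm that an archived copy is identical to the source before the source is deleted; the file paths and the choice of SHA-256 are assumptions.

```python
# Illustrative sketch only: verifying the integrity of a moved record by checksum before the
# source copy is deleted. File paths and the choice of SHA-256 are assumptions for illustration.
import hashlib
from pathlib import Path

def sha256_of(path):
    """Compute the SHA-256 checksum of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with Path(path).open("rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def safe_to_delete_source(source_path, archived_path):
    """Report the source as safe to delete only if the archived copy is bit-identical."""
    return sha256_of(source_path) == sha256_of(archived_path)

# Hypothetical usage:
# if safe_to_delete_source("live/batch_record.xml", "archive/batch_record.xml"):
#     Path("live/batch_record.xml").unlink()
```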

537 17.3. Backup. If data is archived on a server (disk), it should be regularly backed up following the
538 same procedures as for live data (see 16 Backup). As for other backups, these should be
539 physically and logically separated from the archived data.

540 17.4. Durability. If data is archived long-term on volatile storage media with limited durability
541 (e.g. CD), this should follow a validated process. It should ensure that data is stored only
542 for a verified duration according to vendor recommendations, and if necessary, transferred
543 to new media in a secure manner (see 16 Backup).

544 17.5. Retrieval. It should be possible to retrieve archived data and metadata in a format which
545 allows searching and sorting of the data, or alternatively, to allow export of the data to a
546 tool where this is possible.

547 Glossary

548 ALCOA+

549 An acronym for “attributable, legible, contemporaneous, original and accurate”, which puts additional
550 emphasis on the attributes of being complete, consistent, enduring and available, which are implicit
551 in the basic ALCOA principles.

552 Application

553 Software installed on a defined platform/hardware providing specific functionality.

554 Audit trail

555 In computerised systems, an audit trail is a secure, computer generated, time-stamped electronic record
556 that allows reconstruction of the events relating to the creation, modification, or deletion of an electronic
557 record.

558 Backup

559 Provisions made for the recovery of data files or software, for the restart of processing, or for the use
560 of alternative computer equipment after a system failure or disaster.

561 Change control

562 Ongoing evaluation and documentation of system operations and changes to determine whether the
563 actual changes might affect a validated status of the computerised system. The intent is to determine
564 the need for action that would ensure that the system is maintained in a validated state.

565 Commercial off-the-shelf

566 Software or hardware is a commercial off-the-shelf (COTS) product if provided by a vendor to the
567 general public, if available in multiple and identical copies, and if implemented by the test facility
568 management with or without customisation.

569 Computerised System

570 A computerised system is a function (process or operation) integrated with a computer system and
571 performed by trained personnel. The function is controlled by the computer system. The controlling
572 computer system is comprised of hardware and software. The controlled function is comprised of
573 equipment to be controlled and operating procedures performed by personnel.

574 Configuration

575 A configuration is an arrangement of functional units and pertains to the choice of hardware, software
576 and documentation. It affects function and performance of the system.

577 Customisation

578 A computerised system individually designed to suit a specific business process.

579 Electronic record


580 Any combination of text, graphics, data, audio, pictorial, or other information representation in digital
581 form that is created, modified, maintained, archived, retrieved, or distributed by a computer system.

582 Infrastructure

583 The hardware and software, such as networking software and operating systems, which make it possible
584 for the application to function.

585 Migration

586 Data migration is the activity of e.g. transporting electronic data from one computer system to another,
587 transferring data between storage media or simply the transition of data from one state to another [e.g.
588 conversion of data to a different format]. The term “data” refers to “raw data” as well as “metadata”.

589 Multifactor authentication (MFA)

590 A combination of two of the three factors: something you know (e.g. a password), something you have
591 (e.g. a phone or smartcard) or something you are (biometrics).

592 Operating system

593 A program or collection of programs, routines and sub-routines that controls the operation of a
594 computer. An operating system may provide services such as resource allocation, scheduling,
595 input/output control, and data management.

596 Qualification

597 Action of verifying that the system (including hardware and software) is effectively designed, installed,
598 commissioned, and operates correctly. Refer to Validation.

599 Regulated user

600 A company regulated under GMP.

601 Specification

602 A document that specifies, in a complete, precise, verifiable manner, the requirements, design,
603 behaviour, or other characteristics of a system or component, and often, the procedures for determining
604 whether these provisions have been satisfied.

605 Test case

606 A set of test inputs, execution conditions, and expected results developed for a particular objective, such
607 as to exercise a particular program path or to verify compliance with a specific requirement.

608 User

609 An individual user at a company regulated under GMP.

610 User requirement specifications (URS)

611 User requirement specifications define in writing what the user expects the computerised system to be
612 able to do.

613 Validation

614 Action of proving that a process leads to the expected results. Validation of a computerised system
615 requires ensuring and demonstrating the fitness for its purpose.

616 Verification

617 Confirmation, through the provision of objective evidence, that specified requirements have been
618 fulfilled.
