diff --git a/.gitmodules b/.gitmodules
new file mode 100644
index 0000000..12873f3
--- /dev/null
+++ b/.gitmodules
@@ -0,0 +1,3 @@
+[submodule "wiki"]
+	path = wiki
+	url = https://github.com/farlee2121/TestingPatterns.wiki.git/
diff --git a/1.README_SolutionStructure.md b/1.README_SolutionStructure.md
deleted file mode 100644
index c971c7c..0000000
--- a/1.README_SolutionStructure.md
+++ /dev/null
@@ -1,61 +0,0 @@
-# Service Orientation
-
-Service orientation is a pattern on the same level as object orientation or functional programming. It determines your basic unit
-of design. In OO, your basic unit is an object. In functional, it's a function. In service orientation, it's a service and a contract.
-A service is a cohesive and isolated collection of operations. Every function in the service fits one high-level purpose, typically hiding one
-design decision. You should never have to look further than the service to completely understand the work that a service accomplishes. Thus,
-it is imperative to have clear and simple hand-offs to and from the service.
-
-That is where contracts come in. They are the possible hand-off values of a service. If services represent function, contracts represent data.
-The simpler your contracts are, the fewer unexpected ties you can have between services, and the less likely you are to break isolation.
-As a rule of thumb, if you can't tell exactly what you can and can't do based on the interface signature (and maybe a contract definition), it is too complex.
-
-# IDesign
-The key to understanding the code layout is a pattern called IDesign.
-
-IDesign is a layered, service-oriented architecture pattern.
-
-IDesign has five layers.
-  __Clients__ - responsible for consumption of your program. For example, user interfaces, APIs, Windows services.
-  __Managers__ - organize the order of execution. These are the primary flows of your application.
-  __Engines__ - the algorithms, business logic, and data manipulation.
-  __Accessors__ - abstract external resources for use by the application. For example, database access, file system access, external APIs.
-  __Resources__ - anything not controlled by your code (databases, external APIs, etc.). These are what accessors abstract.
-
-Viewed in terms of orthogonality and information hiding, each layer is responsible for isolating/hiding a particular type of design concern.
-  __Accessors__ - third-party code or components not fully in your application's control
-  __Engines__ - computation
-  __Managers__ - composition of functionality
-  __Clients__ - application representation and interaction
-
-Layers may only call into the adjacent lower layer (i.e., engines only call accessors, accessors only call resources).
-The one exception is that managers can call both engines and accessors. Sometimes a client may call an engine, but it is rare
-that this is a good choice.
-
-However, engines never call managers. That would mean your processing is calling your orchestration, which can lead to many unexpected execution-order problems.
-
-These simple rules categorize and organize the vast majority of programming responsibility types. They allow you to quickly find
-a piece of code and limit what other units of code it could be working with, thus reducing complexity.
-
-By considering the few rules of IDesign, you are also led to follow other known best practices like SOLID, information hiding, and stable dependencies.
-
-One principle worth calling out specifically is the single responsibility principle. IDesign draws attention to sneaky violations of it
-by differentiating responsibility types.
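[Editor's note: the call-direction rules this deleted doc describes can be made concrete with a small sketch. All names below are illustrative, not taken from this repo: a manager composes an engine and an accessor through injected interfaces, while the engine knows nothing about the manager.]

```csharp
// Hypothetical sketch of the IDesign call rules; every name here is illustrative.
public class Order { public int Id; public decimal Total; }

public interface IDiscountEngine            // Engine: pure computation
{
    decimal ApplyDiscounts(Order order);
}

public interface IOrderAccessor             // Accessor: hides the resource (the database)
{
    Order GetOrder(int orderId);
    void SaveOrder(Order order);
}

// Manager: composes the flow; the only layer allowed to call both engines and accessors.
public class CheckoutManager
{
    private readonly IDiscountEngine discountEngine;
    private readonly IOrderAccessor orderAccessor;

    public CheckoutManager(IDiscountEngine discountEngine, IOrderAccessor orderAccessor)
    {
        this.discountEngine = discountEngine;
        this.orderAccessor = orderAccessor;
    }

    public void Checkout(int orderId)
    {
        Order order = orderAccessor.GetOrder(orderId);       // Manager -> Accessor: allowed
        order.Total = discountEngine.ApplyDiscounts(order);  // Manager -> Engine: allowed
        orderAccessor.SaveOrder(order);
        // An engine calling back into this manager would invert the layering and is forbidden.
    }
}
```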
-
-# Folder Structure
-The folder structure reflects the primary responsibilities of IDesign, plus other primary concerns like testing.
-It allows the solution to be easily navigated, without an overwhelming number of choices as the
-number of projects grows.
-
-The numbers in the folder names let us order the folders to match the layering metaphor instead of alphabetically.
-
-The shared folder is for meta-infrastructure that is needed across projects: for example, dependency injection config, data contracts, and possibly logging.
-Do not be tempted to put app functionality in this folder.
-
-# Project Name Prefixes
-Adding some namespacing to our projects (e.g. Accessors.Project or Tests.Project) allows our underlying file structure to
-be organized by responsibility type and easily navigated.
diff --git a/Accessors.DatabaseAccessors/TodoItemAccessor.cs b/Accessors.DatabaseAccessors/TodoItemAccessor.cs
index 35d1e1f..caeff6e 100644
--- a/Accessors.DatabaseAccessors/TodoItemAccessor.cs
+++ b/Accessors.DatabaseAccessors/TodoItemAccessor.cs
@@ -54,7 +54,15 @@ public SaveResult SaveTodoItem(TodoItem todoItem)
         {
             TodoItemDTO dbModel = mapper.ContractToModel(todoItem);
 
-            db.AddOrUpdate(dbModel);
+            if (todoItem.Id.IsDefault())
+            {
+                db.TodoItems.Add(dbModel);
+            }
+            else
+            {
+                db.TodoItems.Attach(dbModel);
+                db.Entry(dbModel).State = EntityState.Modified;
+            }
 
             db.SaveChanges();
 
             TodoItem savedTodoItem = mapper.ModelToContract(dbModel);
diff --git a/Accessors.DatabaseAccessors/TodoListAccessor.cs b/Accessors.DatabaseAccessors/TodoListAccessor.cs
index ae68c7f..7d351aa 100644
--- a/Accessors.DatabaseAccessors/TodoListAccessor.cs
+++ b/Accessors.DatabaseAccessors/TodoListAccessor.cs
@@ -71,7 +71,15 @@ public SaveResult SaveTodoList(TodoList todoList)
         {
             TodoListDTO dbModel = mapper.ContractToModel(todoList);
 
-            db.AddOrUpdate(dbModel);
+            if (todoList.Id.IsDefault())
+            {
+                db.TodoLists.Add(dbModel);
+            }
+            else
+            {
+                db.TodoLists.Attach(dbModel);
+                db.Entry(dbModel).State = EntityState.Modified;
+            }
 
             db.SaveChanges();
 
             TodoList savedTodoItem = mapper.ModelToContract(dbModel);
diff --git a/README.md b/README.md
index 7edcc6f..825a396 100644
--- a/README.md
+++ b/README.md
@@ -10,7 +10,7 @@ As a result, it also demonstrates many other patterns and libraries that enable
 
 Here is the presentation that triggered the project: https://20xtesting.slides.spencerfarley.com
 
-Here is a presentation that describes the design philosphies: https://1drv.ms/b/s!AjVvNQ4uturOby6OwKMFpMUlMqA
+Here is a presentation that describes the design philosophies: https://1drv.ms/b/s!AjVvNQ4uturOby6OwKMFpMUlMqA
 
 This project is very much overkill for a Todo list. However, the overkill is to demonstrate the patterns in a way that can easily be transferred
 to a large project
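[Editor's note: the two accessor hunks above inline the add-or-update decision that previously lived on the context (removed further down in this diff). For reference, this is the general Entity Framework 6 pattern they follow — a sketch using this repo's type names but assuming an `int` Id, whereas the repo actually uses an Id type with `IsDefault()`.]

```csharp
using System.Data.Entity; // EF 6

public static class UpsertExample
{
    // A default Id means the entity was never persisted, so INSERT it;
    // otherwise attach the detached instance and mark it Modified so
    // SaveChanges issues an UPDATE without a prior SELECT.
    public static void Upsert(TodoContext db, TodoItemDTO entity)
    {
        if (entity.Id == 0) // sketch assumption; the repo checks Id.IsDefault()
        {
            db.TodoItems.Add(entity);
        }
        else
        {
            db.TodoItems.Attach(entity);
            db.Entry(entity).State = EntityState.Modified;
        }
        db.SaveChanges();
    }
}
```

Note the trade-off: Attach with `State = Modified` marks every mapped column as changed, so concurrent edits to other fields get overwritten; in exchange, the save skips the read before the write.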
diff --git a/README_NamingPatterns.md b/README_NamingPatterns.md
deleted file mode 100644
index 5fd4f5c..0000000
--- a/README_NamingPatterns.md
+++ /dev/null
@@ -1,27 +0,0 @@
-# Naming
-
-Naming is one of the most important tasks you do as a programmer. Names determine how coders will perceive the decomposition of the system,
-how easy it is to tell what code does, and how easy it is to find code for a specific need.
-
-## Manager Naming
-Managers should be named for the application flow they organize. For example, UserCheckoutManager is all the functions needed to
-help a user review their order and purchase.
-Managers tend to be centered not on nouns but on chunks of the UX (user experience). Managers are the head of your application.
-If you were to get rid of all of your clients (web apps, etc.) and write a new one, you should only have to map manager methods to UI
-components.
-
-The functions in a manager should be named for the purpose they accomplish, not the data that they process.
-
-## Engine Naming
-Engines tend to be pretty easy to name. They generally have one computational purpose, or one verb that they center around.
-
-## Accessor Naming
-Accessors are intrinsically tied to a data source. They can be used by many flows and tend to center around a data type.
-
-## Variable Naming
-You should be able to tell what a variable is intended for, not just its type (though in some cases, like accessors, the type indicates sufficient purpose).
-This prevents mistaken manipulation of a variable and clarifies intent for later modification.
-
-You should also never reuse a variable. If you've changed the intent of the data, you should give it a new name to reflect the reason for the change.
-For example, if you sort a list, re-assign it to a variable that declares it as sorted.
\ No newline at end of file
diff --git a/Shared.DataContracts/README_DataContracts.md b/Shared.DataContracts/README_DataContracts.md
deleted file mode 100644
index 87c1a10..0000000
--- a/Shared.DataContracts/README_DataContracts.md
+++ /dev/null
@@ -1,36 +0,0 @@
-# Data Contracts
-
-Data contracts are the possible hand-off values of a service. If services represent function, contracts represent data.
-The simpler your contracts are, the fewer unexpected ties you can have between services, and the less likely you are to break isolation.
-As a rule of thumb, if you can't tell exactly what you can and can't do based on the interface signature (and maybe a contract definition), it is too complex.
-In other words, contracts:
-  - represent the data shared by a flow in the application
-  - are the only non-value types that should be available in multiple projects
-  - are the only non-value types returned from an interface
-
-Data contracts should (sketched below):
-  - be as flat as possible (very rarely contain non-value types)
-  - contain only the data needed for the situation; split contracts if you have unneeded data
-  - be named semantically, so that it is clear what purpose they fulfill
-  - not contain state or logic; this is both a matter of concurrency (thread safety) and conceptual clarity
-  - represent a useful collection of values, not necessarily the database
-
-This
-  - improves interoperability (because services use the same types)
-  - limits the scope of computation-specific and project-specific types by disallowing them as a return type
-  - when separated from data transfer objects, allows you to shape your program without concern for the database structure
-  - limits the ability to couple services through data, because the contracts are designed for an application flow,
-    not for particular pieces of code. Being flat and minimal also reduces the possibility of unintended use or
-    broken expectations
-  - disallows state and logic, which, besides enabling thread-safe operations, decouples data from actions. This allows us to
-    operate on the data in many valid ways without mixing in unrelated logic. It also keeps us from distributing manager-style/organizational code, which leads to unexpected actions.
-    For example, suppose you are passed an object configured to save to the database, but you want to save it to a NoSQL store. Either you don't know where that object is going to save,
-    or every interested piece of code must check where the object is configured to save and mutate it to fit the current need.
-    With actions and data separated, there can be no such confusion, because the orchestration is always left to the service.
-  - allows internal service changes without impact to external code (as long as the internal data maps to the contract, no related services care)
-
-# Result Types (i.e. SaveResult/DeleteResult)
-
-Result types allow a return value with relevant meta-info: most commonly, operation success, or errors if not successful.
-They let you handle errors in a service and normalize success/failure information for consuming services.
\ No newline at end of file
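[Editor's note: a minimal sketch of the contract rules above. The type names follow the TodoItem/SaveResult names this repo already uses, but the members shown are assumptions, not the repo's actual definitions.]

```csharp
using System;
using System.Collections.Generic;

// Sketch only: flat, logic-free contract types. Member lists are assumed,
// not copied from this repo's Shared.DataContracts project.
public class TodoItem
{
    public int Id { get; set; }          // value-type members keep the contract flat
    public string Title { get; set; }
    public DateTime? DueDate { get; set; }
    // No Save()/Validate() methods: behavior stays in services, data stays inert.
}

public class SaveResult
{
    public bool Success { get; set; }
    public IReadOnlyList<string> Errors { get; set; } = Array.Empty<string>();
    public TodoItem SavedItem { get; set; } // echoes back persisted state (e.g., a generated Id)
}
```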
diff --git a/Shared.DatabaseContext/DTOs/README_DataTransferObjects.md b/Shared.DatabaseContext/DTOs/README_DataTransferObjects.md
deleted file mode 100644
index 6ea69da..0000000
--- a/Shared.DatabaseContext/DTOs/README_DataTransferObjects.md
+++ /dev/null
@@ -1,12 +0,0 @@
-# Data Transfer Objects
-
-At first blush, these may seem the same as data contracts. However, data transfer objects reflect the expected results
-of data source queries. They allow us to encode knowledge about our data structure for consumption in our code.
-Data contracts, on the other hand, should have no ties to the data structure.
-
-Handing off from DTOs to data contracts allows only our accessors to contain knowledge of the data source, while the
-rest of the application is unaffected by database changes. This makes it much easier to make data schema changes
-without causing bugs. As long as the accessors map to the data contracts, no changes outside the accessor are needed.
-
-Adding this additional set of models can make for a lot of boring object-mapping work. Fortunately, there are libraries
-like AutoMapper that automate that work, making database isolation the clear winner of the design concerns.
\ No newline at end of file
diff --git a/Shared.DatabaseContext/TodoContext.cs b/Shared.DatabaseContext/TodoContext.cs
index 17afe8b..90e603f 100644
--- a/Shared.DatabaseContext/TodoContext.cs
+++ b/Shared.DatabaseContext/TodoContext.cs
@@ -49,28 +49,5 @@ public override int SaveChanges()
 //                throw new DbEntityValidationException(exceptionMessage, ex.EntityValidationErrors);
 //            }
         }
-
-        /// <summary>
-        /// If the id is default, adds a new entity.
-        /// If the id is anything else, attaches and marks as modified.
-        /// Does not commit changes. You must call db.SaveChanges.
-        /// </summary>
-        /// <typeparam name="T"></typeparam>
-        /// <param name="entity"></param>
-        /// <returns></returns>
-        public T AddOrUpdate<T>(T entity) where T : class, DatabaseObjectBase
-        {
-            if (Shared.DataContracts.Id.Default() == entity.Id)
-            {
-                this.Set<T>().Add(entity);
-            }
-            else
-            {
-                this.Set<T>().Attach(entity);
-                this.Entry(entity).State = EntityState.Modified;
-            }
-
-            return entity;
-        }
     }
 }
diff --git a/Shared.DependencyInjectionKernel/README_DependencyInjection.md b/Shared.DependencyInjectionKernel/README_DependencyInjection.md
deleted file mode 100644
index ecd4c96..0000000
--- a/Shared.DependencyInjectionKernel/README_DependencyInjection.md
+++ /dev/null
@@ -1,41 +0,0 @@
-# Dependency Injection
-
-Dependency inversion is the D of the five SOLID principles. Dependency injection is a popular way of
-achieving dependency inversion.
-
-Dependency injection turns function calls into configuration. Instead of instantiating the class you want to call,
-you have a registry of forms mapped to dependencies. You specify the form of class you want, and the registry container
-hands you an implementation.
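[Editor's note: a minimal sketch of that registry idea, using a hand-rolled toy container rather than whichever DI library this repo actually configures, so every name here is illustrative.]

```csharp
using System;
using System.Collections.Generic;

// Toy registry showing how a "form" (interface) maps to an implementation.
public class Registry
{
    private readonly Dictionary<Type, Func<object>> bindings = new Dictionary<Type, Func<object>>();

    // Configuration: record that requests for TForm are satisfied by TImpl.
    public void Bind<TForm, TImpl>() where TImpl : TForm, new()
        => bindings[typeof(TForm)] = () => new TImpl();

    // Resolution: the caller names the form; the registry picks the implementation.
    public TForm Get<TForm>() => (TForm)bindings[typeof(TForm)]();
}

// Hypothetical usage: swapping the SQL accessor for an in-memory fake is one
// config line, with no change to the manager consuming the interface.
//   registry.Bind<ITodoItemAccessor, SqlTodoItemAccessor>();
//   var accessor = registry.Get<ITodoItemAccessor>();
```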
-In languages like C#, the 'form' is almost always specified using interfaces. Interfaces have the benefit of ensuring a
-certain level of behavior, and one class can satisfy many interfaces.
-
-So, what's the deal? Why do we even want this?
-
-Academically, it breaks the dependency of the higher-level class on the lower-level class. You are no longer explicitly
-tied to the details of low-level concerns.
-
-Practically, this
-  - allows us to write code top-down, simply writing in the interfaces of the next level of dependencies we need.
-    This makes for less code rework and allows us to test a flow without writing the whole dependency stack
-  - allows us to swap in different dependencies of equivalent purpose with only a config change
-    - e.g. you could completely switch storage paradigms based on execution environment
-  - allows us to isolate code for testing
-
-# Distributed Dependency Configuration / Loaders
-
-You may notice that each project (thus assembly) defines its own dependency injection configuration.
-This allows us to configure DI without making implementations public, which prevents people from using
-concrete classes directly and thus breaking code isolation.
-
-It also packages dependencies that are used together, cutting down on use-time config setup.
-
-# Friend Assemblies
-Alongside every dependency loader is a friend assembly file. Friend assemblies are a .NET concept
-that allows you to expose internal constructs to specific assemblies.
-
-This allows us to test against concrete classes while keeping them hidden from other consuming assemblies.
-
-We have a dedicated friend assembly file so the friends are easy to find and modify, and so this
-unrelated concern stays out of the service implementations.
\ No newline at end of file
diff --git a/Shared.DependencyInjectionKernel/README_DependencyInjectionKernel.md b/Shared.DependencyInjectionKernel/README_DependencyInjectionKernel.md
deleted file mode 100644
index 02f4984..0000000
--- a/Shared.DependencyInjectionKernel/README_DependencyInjectionKernel.md
+++ /dev/null
@@ -1,7 +0,0 @@
-# Dependency Injection Kernel
-
-The dependency kernel allows us to centralize our dependency injection configuration. This helps prevent errors
-caused by forgetting to update different clients that require configuration for new service types.
-
-This class allows us to offer alternative configurations (through additional methods) for clients that may only require a subset of the
-dependency configuration, while keeping the dependency maps in one clear place.
\ No newline at end of file
diff --git a/Test.NUnitExtensions/README_TestPrefixes.md b/Test.NUnitExtensions/README_TestPrefixes.md
deleted file mode 100644
index bfd42fa..0000000
--- a/Test.NUnitExtensions/README_TestPrefixes.md
+++ /dev/null
@@ -1,13 +0,0 @@
-# Auto-Generated Test Prefixes
-
-Prefixing the name of the tested class to a test name produces more clearly organized test results.
-It also allows us to more quickly identify trends in test failures.
-
-However, manually maintaining these prefixes is a pain. If you change a class name, it is easy to forget
-to rename all the tests, and renaming them all is tedious.
-
-Auto-generating the prefix from the test subject's type means we can't forget to rename the tests,
-and it keeps information we already know out of the test names while we are in the test class.
-
-Generated prefixes can also help differentiate tests that generate multiple cases, such as with integration/unit
-test reuse.
\ No newline at end of file
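[Editor's note: one way such a prefix could be generated — a sketch using NUnit's real TestCaseData naming hook, with hypothetical helper and subject names; this repo's actual Test.NUnitExtensions implementation is not shown in this diff.]

```csharp
using NUnit.Framework;

// Hypothetical sketch: derive the runner-visible name from the subject type,
// so renaming TodoItemManager automatically renames its tests.
public static class Prefixed
{
    public static TestCaseData Case<TSubject>(string scenario, params object[] args) =>
        new TestCaseData(args).SetName($"{typeof(TSubject).Name}_{scenario}");
}

// Usage with a TestCaseSource (TodoItemManager is illustrative):
//   public static IEnumerable<TestCaseData> Cases()
//   {
//       yield return Prefixed.Case<TodoItemManager>("SaveTodoItem_PersistsNewItem");
//   }
```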
diff --git a/Tests.DataPrep/README_DataPrep.md b/Tests.DataPrep/README_DataPrep.md
deleted file mode 100644
index 80d7bc6..0000000
--- a/Tests.DataPrep/README_DataPrep.md
+++ /dev/null
@@ -1,20 +0,0 @@
-# DataPrep Pattern
-The DataPrep pattern is a way of centralizing test data generation and creating readable handles for common
-data requests.
-
-The pattern consists of two parts:
-1. Individual data preps. These handle
-   - the construction of a particular type based on passed conditions
-   - abstraction of complex arrangement scenarios
-2. A data prep orchestrator. This class is responsible for
-   - providing a central handle for creating test data
-   - allowing us to configure type data preps uniformly
-   - allowing us to present type data preps differently for different scenarios
-
-We utilize a test data generation library called Bogus. Bogus can generate complex data types, so why don't we use it directly?
-Well,
-  - when data structures change, it is much harder to find specific uses of a library than a central method
-  - you still end up with a lot of noisy configuration in your tests
-  - data prep produces a clearer specification of test situations, especially complex ones
-  - a custom DataPrep wrapper allows us to abstract how we persist data
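[Editor's note: a sketch of how such a prep pair might look. It uses Bogus's real `Faker<T>` API, but the prep class names are hypothetical and the TodoItem members (Title, DueDate) are the assumed shape from the contract sketch earlier in these notes.]

```csharp
using Bogus;

// Individual data prep (hypothetical): owns construction rules for one type.
public class TodoItemPrep
{
    private readonly Faker<TodoItem> faker = new Faker<TodoItem>()
        .RuleFor(t => t.Title, f => f.Lorem.Sentence())
        .RuleFor(t => t.DueDate, f => f.Date.Future());

    public TodoItem New(bool overdue = false)
    {
        TodoItem item = faker.Generate();
        // Readable handle for a common arrangement scenario:
        if (overdue) item.DueDate = System.DateTime.UtcNow.AddDays(-1);
        return item;
    }
}

// Orchestrator (hypothetical): the single handle tests reach for.
public class DataPrep
{
    public TodoItemPrep TodoItems { get; } = new TodoItemPrep();
    // Persistence on/off would be configured here once, not per test.
}
```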
diff --git a/Tests.ManagerTests/README_IntegrationReuse_WhiteDoc.md b/Tests.ManagerTests/README_IntegrationReuse_WhiteDoc.md
deleted file mode 100644
index ce7746d..0000000
--- a/Tests.ManagerTests/README_IntegrationReuse_WhiteDoc.md
+++ /dev/null
@@ -1,51 +0,0 @@
-# Integration Reuse WhiteDoc (discovery process)
-
-TL;DR: Discovered that NUnit is much more powerful than MSTest.
-
-I started out using MSTest. Integration tests could be reused in several ways:
-
-1. Creating a second class, instantiating the unit test class with a full DI configuration and data prep set to persist,
-then creating a test method that calls into each method on the unit test class you want to reuse.
-   - Pros: clear names and differentiation in the test runner. Can pick and choose methods to reuse.
-   - Cons: lots of manual maintenance.
-2. Inheriting from the original test class, modifying the constructor to specify integration mode.
-   - Pros: no additional changes when tests are added/removed/renamed.
-   - Cons: it causes duplicate names, and both integration and unit runs will be directed to the inherited class, making it unclear which test is which.
-3. Reflecting over a class's methods and raising an exception that highlights the failed method with a reason.
-   - Cons: still collapses test runs in the test explorer. Reflection is slow.
-4. Generating a code file at build time with a plugin. It would be a pretty simple plugin with simple emitted code.
-   - Cons: have to write and run additional code as well as install an extension.
-5. Trying to extend TestMethod or TestClass to run tests twice.
-
-Alternative #5 led me to explore extensibility in MSTest. In short, it is kinda limited and does not let you set the test name.
-How to extend MSTest:
-- https://blogs.msdn.microsoft.com/devops/2017/07/18/extending-mstest-v2/
-- https://github.com/Microsoft/testfx-docs/blob/master/RFCs/003-Customize-Running-Tests.md
-
-However, NUnit can handle runs with different parameters out of the box. It has much more powerful functionality and extensibility.
-
-NOTE: you can run MSTest and NUnit side by side, which makes it very easy to transition progressively.
-
-The question is now how to generate differentiated names in the test runner with NUnit.
-- https://www.red-gate.com/simple-talk/dotnet/net-tools/testing-times-ahead-extending-nunit/
-- https://github.com/nunit/docs/wiki/Custom-Attributes
-- https://github.com/nunit/docs/wiki/Writing-Engine-Extensions
-- Could probably use TestCase attributes to define integration mode (or not) on each method. A bit verbose, non-semantic, and extra init work.
-- NOTE: Changing the test name with something like IApplyToTest does not change the name in the test runner.
-- NOTE: NUnit supports test name string formatting: https://stackoverflow.com/questions/26374265/access-nunit-test-name-within-testcasesource
-- Conclusion: The rules for name generation are not very clear in NUnit.
-  However, I have verified that you can successfully overwrite the name in either
-  the test builder or the fixture builder.
-  It is not entirely clear how to extend these types while maintaining behavior,
-  because you cannot override when you inherit from the default attributes. Instead,
-  the logic is available through internal builders and type constructors.
-  However, you can also inherit from the default attribute and 'hide' the base class's method
-  by creating one of the same signature. This breaks Liskov substitution, but it
-  allows you to reuse the property logic of the base attributes.
-
-BONUS: Interesting thread on using NUnit tests to augment documentation: https://stackoverflow.com/questions/8727684/how-can-i-generate-documentation-from-nunit-tests
\ No newline at end of file
diff --git a/Tests.ManagerTests/README_TestClassReuse.md b/Tests.ManagerTests/README_TestClassReuse.md
deleted file mode 100644
index 82c1fdd..0000000
--- a/Tests.ManagerTests/README_TestClassReuse.md
+++ /dev/null
@@ -1,17 +0,0 @@
-# Test Class Reuse
-
-There are two common automated test types written by developers:
-  - Unit tests: verify a specific piece of code in isolation
-  - Integration tests: verify that the components in a flow work together as expected
-
-Between our dependency injection and abstracted data prep, our tests can configure both the
-dependencies and the persistence of data independently of individual tests. This means that our tests
-specify a situation, without regard to execution context.
-
-Thus, our unit tests and integration tests now differ only by configuration. Using NUnit,
-we can easily run a test class with two different configurations and cut our testing effort in half!
-
-Some examples are shown of test reuse with MSTest, but it is neither as easy nor as clear as with NUnit.
-
-We have plans to create clearer NUnit attributes that will prefix integration tests and allow us to exclude
-individual methods from either unit or integration runs.
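[Editor's note: the "two configurations of one fixture" idea maps directly onto NUnit's parameterized TestFixture attribute — a real NUnit feature; the fixture contents below are hypothetical, as this repo's actual manager tests are not shown in this diff.]

```csharp
using NUnit.Framework;

// NUnit runs this fixture once per [TestFixture] attribute, so every test
// below executes as both a unit run and an integration run.
[TestFixture(false)]
[TestFixture(true)]
public class TodoItemManagerTests
{
    private readonly bool integrationMode;

    public TodoItemManagerTests(bool integrationMode)
    {
        this.integrationMode = integrationMode;
        // Hypothetical setup: integration mode would pick the real DI config and
        // persisting data prep; unit mode would pick mocks and in-memory prep.
    }

    [Test]
    public void SaveTodoItem_ReturnsSuccess()
    {
        // Same arrange/act/assert either way; only the configuration differs.
        Assert.Pass($"ran with integrationMode={integrationMode}");
    }
}
```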
diff --git a/Tests.TestBases/README_TestingTools.md b/Tests.TestBases/README_TestingTools.md
deleted file mode 100644
index a767355..0000000
--- a/Tests.TestBases/README_TestingTools.md
+++ /dev/null
@@ -1,22 +0,0 @@
-# Testing Tools
-
-This is sort of a catch-all doc for testing concerns and the awesome libraries that make testing easier.
-
-## Test Dependency Generation
-When testing code, you want the test subject isolated so that there are no errors from code you
-don't intend to test.
-Dependency injection allows us to configure test dependencies that only do what we expect for the test, but
-it can be a lot of work to specify stub dependencies.
-
-That is where JustMock comes in. It auto-generates test dependencies and allows you to modify their behavior
-in the unit test as needed.
-
-## Complex Object Comparison
-DeepEqual allows you to compare complex objects by their values and modify comparison behavior as needed.
-
-## Data Cleanup
-.NET transactions allow you to treat each test as a unit of work. That unit can then be committed or rolled back
-as a whole. This means you don't have to worry about leaving test data behind; it happens auto-magically.
-
-## DataPrep
-Has its own doc; check it out.
\ No newline at end of file
diff --git a/azure-pipelines.yml b/azure-pipelines.yml
index 21a0eee..dbb28f0 100644
--- a/azure-pipelines.yml
+++ b/azure-pipelines.yml
@@ -23,8 +23,4 @@ steps:
     projects: '**/Tests.*/*.csproj'
    arguments: '--configuration $(buildConfiguration) --collect "Code coverage"'
   displayName: Unit Tests
-- task: codecoveragecomparerbt@1
-  displayName:
-  inputs:
-    codecoveragemeasurementmethod: 'Lines'
diff --git a/wiki b/wiki
new file mode 160000
index 0000000..08ba564
--- /dev/null
+++ b/wiki
@@ -0,0 +1 @@
+Subproject commit 08ba5643616c6a9415b90542bb38754cc7e86e34
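[Editor's note: to close, the transaction-based cleanup described in the deleted Testing Tools doc can be sketched as a test base class. TransactionScope is the real .NET type; the base class itself is hypothetical, not this repo's Tests.TestBases code.]

```csharp
using System.Transactions;
using NUnit.Framework;

public class TransactionalTestBase
{
    private TransactionScope scope;

    [SetUp]
    public void OpenTransaction() => scope = new TransactionScope();

    [TearDown]
    public void RollBack() => scope.Dispose(); // never calls Complete(), so all test writes roll back
}
```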