Basic Concepts of Software Design and Architecture

If you want to pursue software development as a career or even just as a hobby, you must be able to reason and think abstractly, but what exactly does this mean?

What is abstract thinking? I define it as being able to create generalized mental models of specific processes, objects, or occurrences from the real world or a specific domain, which you can then use (i.e. “apply”) to solve problems much more easily in the future. Essentially, abstract thinking is dealing with concepts and ideas which do not exist in nature. Mathematics is the most obvious example of abstract thinking in our lives, which we take for granted. Sometimes as software developers, we also take for granted how much we are dealing with abstractions. This is especially true if we are self-taught, which is pretty much all of us, even if we have one or more Computer Science degrees.

What is reasoning? It is the process of understanding and forming new mental models from outside information. There are several types of reasoning but the three most important ones for software development are: reasoning by induction, reasoning by deduction, and reasoning by analogy. Kline describes them as:

Software is so ubiquitous in the 21st century that even laypeople talk about it all the time (“I just downloaded this great app…”) but nobody really stops to think about just what it is. We know that software is composed of machine instructions that are processed by the CPUs of physical machines and/or virtual machines, but it is much more. Software is an intangible abstraction which has real-world applications, such as providing information to decision-makers (i.e. “business intelligence”), providing answers to specific problems, operating industrial machinery, and so on.

I’m assuming that you are programming using a high-level language (C#) which is object-oriented and operating several levels removed from the physical processor that ultimately executes your code. So now we’re in an entirely different world, in which the basic building blocks for our systems are the language keywords, programming constructs and .NET Core framework classes which we use to build software. In this sense, we might use “level of abstraction” to discuss the degree to which something in our code is separated from these basic building blocks.

As a basic guide, we can enumerate these levels from most abstract to least abstract, like this:

In general, we draw a hard line between the Layers and Components levels of abstraction: levels 1 through 3 are at the architectural level, levels 4 through 6 are at the code level. This will be important going forward.

Be aware that the term “interface” may not necessarily refer to explicit interfaces under .NET, but may also refer to things like the programmable surface of web applications, cloud services, or even hardware devices. In general, we refer to these as Application Programming Interfaces, or APIs. These typically operate at a higher level of complexity than abstractions in a programming language. See the discussion below on toolkits, frameworks, and APIs.

What is software architecture? I define it as two separate concepts.

I typically use the term in both ways but will make an effort to use “architectural template” for the second.

Software architecture is very different from, and some might say harder than, physical (building) architecture. Software architecture is abstract, multi-dimensional, and highly dynamic. When building a software system, the materials (i.e. “bricks”) are lines of code, which have a negligible cost. However, since we are dealing in abstractions and intangibles, it is harder to lock down both requirements and a finished design. Compared to real-life construction projects, software projects can quickly morph into multi-headed hydras, tearing apart budgets and timelines with terrifying thoroughness as the team struggles to deliver… something, anything. This is why modern software delivery practices such as Scrum/Agile have become all the rage in the 21st century. It is also why you should care about software architecture concepts, and the software design principles that underlie them.

It could be argued that software systems are evolved, not designed. This is an over-simplification. It is true that you cannot design an entire system at one time, down to every last detail, and even if you could the business users would change the requirements on you as soon as you deployed it to production. In practice software systems begin with an initial design which is constructed from requirements, and then that system evolves over time as requirements change and new features are added. Oftentimes, systems change not just in response to changing requirements, but in response to an elucidation of requirements that happens over an extended period through an ongoing dialog between the technical team and the business team.

This last point is important, and many experts would agree: software systems are emergent in nature. A system built upon sound principles, utilizing good patterns and practices, and following consistent conventions will evolve gracefully over time, be easier to maintain, and have greater longevity. As a knock-on effect, it will save enormous amounts of time and money. Conversely, a system built without any mind to principles, utilizing anti-patterns and bad practices, and having inconsistent conventions will likely miss both time and budget objectives in its initial creation, and it will both devolve and degrade over time as more people work on it. As a result, it will likely cause developer burnout, have a much shorter lifespan, and waste enormous amounts of time and money which are difficult to quantify. This is critical, and I will explain more below.

At the other end of the spectrum you have the stodgy, structured, ultra left-brained types who believe that EVERYTHING must be planned in advance and formalized into epic volumes of specification documents with every “i” dotted and every “t” crossed. Long, tedious whiteboarding sessions involving committee discussions and political negotiations between teams occur over weeks or months until a consensus is reached, and business/systems analysts can compile the results into an executable plan. Those official documents are then ceremoniously handed over to the development team who, in lobotomized fashion, execute the plan to the letter with zero feedback or critical thought involved, typically following a waterfall-type process. What do you think the result of this is? The answer: a system which is unmaintainable in a different kind of way!

Over-engineered solutions fail as well because nobody is omniscient, requirements are ALWAYS missed in every project of this type, and using a command-and-control project management style deprives the team of its most valuable resource — the creativity, critical thinking, and problem-solving capacity of its constituent members, down to the developer who is punching in each line of code. The result is a system that is arcane, opaque, badly documented (both formally and in terms of the intelligibility of the code itself), and often hacked together just like your garden-variety BBM (big ball of mud), because the team was cutting corners in order to stay on schedule. It was built in a vacuum without feedback between the developers and the stakeholders, so it’s likely to have missed the mark entirely in regard to requirements, which is ironically what this style tries to avoid in the first place. Agile enthusiasts derisively call this approach Big Design Up Front, or BDUF. According to legend, this used to be the norm in all corporate software environments, especially in large organizations. Modern software practices such as Agile and Domain-Driven Design came about in response to this. What they all have in common is that they seek to tighten the feedback/execution loop so that you can plan accordingly while responding to changes.

So what approach is best? Is there a best approach? It all depends on the situation. The process is more art than science, and it depends upon so many factors, many of which are impossible to quantify, such as:

As a final word, I will state vociferously that the best software development lifecycle (SDLC) process in the world won’t save a project if the developers don’t have a solid grasp of fundamental concepts, and I’m not just talking about language features or the latest cool framework. Remember what I said above about software systems being emergent? That’s right, the success or failure of any software project depends upon the knowledge and skill of each team member at a granular level. I’m talking about the concepts and behaviors that the team members understand and abide by, which is the topic of the next section.

Here it is again: software systems emerge from a primordial conceptual soup of patterns, practices, principles and conventions that the developers understand and know how to put into practice. These are the basis for both the high-level architecture and the code-level constructs that comprise the system. This is where the rubber meets the road, so to speak. Here’s what those terms mean, and the distinction between each of them.

Practices are a colloquial set of rules that we as developers have learned over time and apply to our work. Sometimes they are explicitly stated or documented. Other times we learn them by imitation or induction. An example may be where a developer chooses to put business logic — in the UI, in the controller of an MVC app, in a stored procedure of a database, or in a specialized business layer. Practices are functional: altering these will fundamentally change the way a software system operates under the hood.

Some practices are “good,” and some are “bad,” though there is often debate, as these can be subjective. In certain situations, there is overwhelming consensus that something is either beneficial or detrimental to software development. For instance, it is generally agreed that cutting and pasting code all over the place is detrimental as it leads to software that is hard to maintain.

As you improve your craft, your intuition will often inform you when you are encountering a possible detrimental software practice. For example, you may be looking over somebody else’s code and think to yourself, gee, every time I use this API class, I have to invoke a series of initialization methods in a very specific order or it throws an exception. Something isn’t right here. Or perhaps, in order to test for the non-existence of something through an API call, I must perform an arbitrary operation against that object and then catch an exception if it doesn’t exist. When you experience this sensation, you are in the presence of a code smell. A code smell may indicate that something is out of whack, or you may have had way too much coffee and shouldn’t have been coding for 12 hours straight, and you need to take a break. Going back to our examples above, the first instance may or may not indicate a bad software practice in that the initialization methods could have been combined into a single method or eliminated altogether, depending upon the use case of the API. In the second instance, this almost certainly indicates a bad practice on the part of the API designers, as testing for the existence of something is not an error condition, and the API should give us a better way to do this rather than forcing us to use exceptions for flow control in our logic.
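To make that second smell concrete, here is a minimal sketch (the CustomerLookupExamples class and its methods are hypothetical, not from any real API) contrasting an existence check that abuses exceptions for flow control with one that asks the question directly:

```csharp
using System.Collections.Generic;

public static class CustomerLookupExamples
{
    // Smell: an exception is used to answer a routine "does it exist?" question.
    public static bool ExistsViaException(IDictionary<string, string> customers, string id)
    {
        try
        {
            var _ = customers[id];       // the indexer throws when the key is absent
            return true;
        }
        catch (KeyNotFoundException)     // exception abused for flow control
        {
            return false;
        }
    }

    // Better: the API already exposes a non-exceptional way to ask the same question.
    public static bool ExistsDirectly(IDictionary<string, string> customers, string id)
        => customers.ContainsKey(id);
}
```

The first version works, but every caller pays for the awkwardness; the second makes the common case cheap and obvious.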

Practices are granular behaviors which have a fundamental impact on your productivity and the quality you deliver. Like anything else in life, bad habits/practices lead to poor results. If you use improper form while weight training at the gym, you are likely to injure yourself. If you use bad practices when developing software, you are likely to build solutions which are convoluted, difficult or impossible to maintain, don’t do what the customer wants, wind up wasting a ton of time and money, and ultimately injure your career.

Maintainability is the most important consideration when building software. As software evolves over time, it tends to become less maintainable, not more so. Using good practices produces software that has a greater longevity and is less tedious to maintain over the long-term. Using bad practices causes software solutions to accumulate technical debt much more quickly, ultimately driving the solution over what I call the Cliff of Maintainability. This is the point at which incremental changes to the system are prohibitively costly to implement, and it is no longer feasible to keep a system running either in terms of time or money. A complete overhaul is required. You owe it to yourself, your team, your company, and the world at large to keep learning and using the best tools and practices that are at your disposal.

Just like practices, some patterns are “good,” and some are “bad,” though there is a gray area here as well. Bad patterns are referred to as anti-patterns. Junior to mid-level developers may be able to detect anti-patterns using their code smell sense. Senior developers and architects can often immediately spot an anti-pattern, call it by name, and explain the detrimental implications of that pattern to the system and the business — e.g. “Looks like they’re using Entity-Attribute-Value in the database. This will make it extremely difficult to write queries against this table and it will probably become unmaintainable in two years.”

Patterns are important because they provide recipe-like, reusable solutions to common problems, which helps accelerate development efforts. They provide a common language and lexicon for quickly describing complex concepts to other developers and architects, which helps eliminate ambiguity and increases productivity. Because they represent accumulated knowledge, sometimes acquired through trial and error, these can help to avoid common pitfalls that could waste time or even compromise the entire software project.

I believe that principles represent pieces of deeper wisdom, as opposed to knowledge, about building software systems. Principles, by their very nature, demand a certain requisite level of first-hand experience in order to be applied productively, which is why you’re more likely to encounter them being espoused by a senior-level developer or architect, as opposed to a junior developer. Still, anyone can learn them, and the more you know before you dig in and get your hands dirty, the more proficient you will become over time.

Here are some of the most important principles you’re likely to encounter when developing software, and a brief description of each.

This is the counterargument to YAGNI. Sometimes you need to build your solution a certain way because you know from experience that it will have to meet certain conditions in the future. This is related to another Steve McConnell concept, which is that it’s far cheaper to fix a problem upstream in the development process than downstream. Likewise, if you know for a fact that a customer will request a feature at some point in the future, it could be cheaper and more maintainable in the long run to include it at the outset.

Copying and pasting the same code construct all around your solution does not make for maintainability. What’s more, if you must change that construct, you wind up making the same change in every place it was pasted, which wastes time and is error-prone. A good example might be a certain try/catch block to handle an error condition. Aim for code reuse, which means having that logic in one place and calling it from the components that need it.
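As a hedged illustration (the SafeRunner helper and ReportJob class below are invented for this sketch), here is what pulling a repeated try/catch into one reusable place might look like:

```csharp
using System;

public static class SafeRunner
{
    // The error-handling policy lives in exactly one place.
    public static bool TryRun(Action work, string operationName)
    {
        try
        {
            work();
            return true;
        }
        catch (Exception ex)
        {
            Console.Error.WriteLine($"{operationName} failed: {ex.Message}");
            return false;
        }
    }
}

public class ReportJob
{
    public void Execute()
    {
        // Callers reuse the helper instead of pasting the try/catch yet again.
        SafeRunner.TryRun(() => Console.WriteLine("Generating report..."), "Report generation");
        SafeRunner.TryRun(() => Console.WriteLine("Sending notification..."), "Notification");
    }
}
```

If the logging format or error policy ever changes, you change it once.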

This is the idea that everything related to a certain piece of data or functionality is together in one place. Furthermore, internal details are hidden from outside agencies, not necessarily because of security concerns, but because details are a mental burden and the system becomes more comprehensible if you keep those out of the way. This is related to the notion of information hiding.
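A minimal sketch of the idea (the BankAccount class is purely illustrative): the data and the rules that protect it live together, and callers never see the internal field.

```csharp
using System;

public class BankAccount
{
    private decimal _balance;               // hidden detail: outsiders cannot touch this directly

    public decimal Balance => _balance;     // read-only view exposed to the outside world

    public void Deposit(decimal amount)
    {
        if (amount <= 0)
            throw new ArgumentOutOfRangeException(nameof(amount), "Deposit must be positive.");
        _balance += amount;
    }

    public void Withdraw(decimal amount)
    {
        if (amount <= 0 || amount > _balance)
            throw new InvalidOperationException("Invalid withdrawal amount.");
        _balance -= amount;
    }
}
```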

Piggybacking off the last principle, it’s worth mentioning cohesion. This is simply a determination of how well a certain group of components/classes work together to accomplish a common purpose. Aim for high cohesion in your designs. An example of high cohesion is a class which exposes a bunch of extension methods which perform different kinds of string manipulation. An example of a class with low cohesion might be something that exposes some string manipulation methods along with methods to send an email or perform a numerical calculation. Don’t do this.
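Here is a rough sketch of that contrast, with hypothetical class names:

```csharp
using System;

public static class StringHelpers          // high cohesion: every member is about strings
{
    public static bool IsBlank(this string value)
        => string.IsNullOrWhiteSpace(value);

    public static string Truncate(this string value, int maxLength)
        => value.Length <= maxLength ? value : value.Substring(0, maxLength);
}

public class MiscellaneousUtilities         // low cohesion: three unrelated responsibilities
{
    public string Shout(string value) => value.ToUpperInvariant();
    public void SendEmail(string to, string body) { /* imagine SMTP plumbing here */ }
    public double CompoundInterest(double principal, double rate, int years)
        => principal * Math.Pow(1 + rate, years);
}
```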

Proceeding logically from the last point, I’d like to mention separation of concerns. This is another way of saying that you should not mix your peas with your carrots, and you should not mix user interface logic in with your business logic. You’ll find this discussed much more at length regarding architectural concepts, but it basically says the same thing as aiming for high cohesion: like things go together, and components which have entirely different purposes should be kept apart.

Classes, components, layers and the like should be flexibly built so that they can be unplugged from each other without causing cascading changes to the system. This is called loose coupling. Think about it: how safe would you feel driving around in a car in which the power steering system was permanently fused to the radio, which was in turn fused to the tail lights, so that if one of them stops working then all of them stop working? In the same vein, the components you build in your system should not have hard dependencies on each other (pay attention — I’m building toward a critically important apotheosis here). I’ll explain how this is done in practice below.

The components and high-level modules you build should be loosely coupled, but they should also make it entirely clear what they depend upon in order to function. Transparency is the order of the day, and there shouldn’t be any mystery as to what’s required to use them. Think compositionally and take careful consideration of how the pieces of the system need to interact.

The SOLID principles are:

Single Responsibility. Classes and high-level components should do one thing, only one thing, and they should do it well. Abiding by this principle results in code that is highly cohesive.
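As a hedged sketch (the order-processing classes below are invented for illustration), compare one class per responsibility with the alternative of a single class that both parses and persists:

```csharp
public record Order(string Id, decimal Total);

public class OrderParser                    // one responsibility: turn raw text into Orders
{
    public Order Parse(string line)
    {
        var parts = line.Split(',');
        return new Order(parts[0], decimal.Parse(parts[1]));
    }
}

public class OrderWriter                    // one responsibility: persist Orders
{
    public void Write(Order order, string path)
        => System.IO.File.AppendAllText(path, $"{order.Id},{order.Total}\n");
}
```

Collapsing both jobs into one OrderManager class would work, but every change to the file format or the parsing rules would then ripple through the same type.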

Open/Closed. Originally formulated by Bertrand Meyer, this states that classes should be open to extension but closed to modification. Adding new features to a system should not trigger regressions, or breaking changes.
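A small sketch of the idea, using invented pricing-rule names: the calculator stays closed to modification, while new behavior arrives as new classes.

```csharp
using System.Collections.Generic;
using System.Linq;

public interface IDiscountRule
{
    decimal Apply(decimal price);
}

public class HolidayDiscount : IDiscountRule
{
    public decimal Apply(decimal price) => price * 0.90m;   // 10% off
}

public class LoyaltyDiscount : IDiscountRule                // added later: no existing code edited
{
    public decimal Apply(decimal price) => price - 5m;
}

public class PriceCalculator
{
    private readonly List<IDiscountRule> _rules;

    public PriceCalculator(IEnumerable<IDiscountRule> rules) => _rules = rules.ToList();

    // This method never changes when new discount rules appear.
    public decimal FinalPrice(decimal basePrice)
        => _rules.Aggregate(basePrice, (price, rule) => rule.Apply(price));
}
```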

Liskov Substitution. Introduced by Barbara Liskov, this states that you should be able to take a more specific type of something and treat it as a more general type without breaking anything. This is based on the notion of substitutability.
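A minimal sketch (the shapes are illustrative): code written against the general Shape type keeps working for any well-behaved subtype you substitute in.

```csharp
using System;
using System.Collections.Generic;

public abstract class Shape
{
    public abstract double Area();
}

public class Rectangle : Shape
{
    public double Width { get; init; }
    public double Height { get; init; }
    public override double Area() => Width * Height;
}

public class Circle : Shape
{
    public double Radius { get; init; }
    public override double Area() => Math.PI * Radius * Radius;
}

public static class ShapeMath
{
    // Written against the general type; any substitutable subtype must work here unchanged.
    public static double TotalArea(IEnumerable<Shape> shapes)
    {
        double total = 0;
        foreach (var shape in shapes)
            total += shape.Area();
        return total;
    }
}
```

A subtype that broke the contract (say, by throwing from Area or returning negative values) would violate substitutability even though the code still compiles.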

Interface Segregation. Think of this as Single Responsibility applied to interfaces. When you are building out abstractions (i.e. interfaces), how much pain is involved in implementing them, and are all the methods of each interface cohesive? The extreme version of this is what Mark Seemann refers to as role interfaces — interfaces with a single member — and they make for much more maintainable and extensible systems.
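Here is a hedged sketch of role interfaces, with invented names: each interface has a single member, and a consumer asks only for the role it actually needs.

```csharp
public interface ICanReadCustomers          // role: read a customer's name
{
    string GetName(string customerId);
}

public interface ICanDeleteCustomers        // role: remove a customer
{
    void Delete(string customerId);
}

// The greeting feature depends on exactly one narrow role, nothing more.
public class GreetingService
{
    private readonly ICanReadCustomers _customers;

    public GreetingService(ICanReadCustomers customers) => _customers = customers;

    public string Greet(string customerId) => $"Hello, {_customers.GetName(customerId)}!";
}
```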

All the SOLID principles influence each other and strive for the same objective, which is simple, maintainable code. However, I’d like to touch on the Dependency Inversion principle some more because it is so important. Sometimes referred to as Inversion of Control, this principle, in a nutshell, gives us the direction we need toward creating loosely coupled, extensible components. It is saying that you:

What is dependency injection? It’s a topic which is complex enough that entire books are written on it, but I’ll explain the basics here. It is the means by which we achieve dependency inversion, and that is by not allowing any of our business classes and core logic to create instances of the classes they depend upon. Rather, we build these classes so that they receive their dependencies as parameters to their constructors (constructor injection). Those parameters are often interfaces and other abstractions, but they don’t have to be. By not having to worry about instantiating their dependencies, our core business classes can focus on doing what they do best. An example may be a class which reads a bunch of user data from a database, aggregates it, and writes the results to a spreadsheet file. That class should receive a persistence interface which allows it to read the database data and a file system interface which allows it to write to the file system. The details of those interfaces are irrelevant. Our business class just expects them to work when it calls methods against them as part of an implicit code contract. Notice how we’ve managed to follow the spirit of multiple principles here: Dependency Inversion, Single Responsibility, High Cohesion, Open/Closed, and so on. In the finished solution, concrete implementations of these dependencies will need to be injected into the classes that need them. How you go about this is a design decision, but a common approach is to use an automated tool called an Inversion of Control container, or IOC container.
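To ground that example, here is a minimal constructor-injection sketch; the interface and class names (IUserRepository, IFileWriter, UserActivityReport) are my own illustrations, not from any particular library:

```csharp
using System.Collections.Generic;
using System.Linq;

public interface IUserRepository
{
    IEnumerable<(string Region, int LoginCount)> GetUserActivity();
}

public interface IFileWriter
{
    void WriteAllLines(string path, IEnumerable<string> lines);
}

public class UserActivityReport
{
    private readonly IUserRepository _users;
    private readonly IFileWriter _files;

    // Dependencies arrive through the constructor; this class never news them up itself.
    public UserActivityReport(IUserRepository users, IFileWriter files)
    {
        _users = users;
        _files = files;
    }

    public void Generate(string outputPath)
    {
        var rows = _users.GetUserActivity()
            .GroupBy(a => a.Region)
            .Select(g => $"{g.Key},{g.Sum(a => a.LoginCount)}");

        _files.WriteAllLines(outputPath, rows);
    }
}
```

At the composition root, a SQL-backed repository and a real file writer (or in-memory fakes, in a unit test) would be handed in, typically by an IOC container.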

This follows logically from the Dependency Inversion principle, and it simply states that systems should be agnostic of their underlying data store, whether that’s a database somewhere, the file system, or some kind of storage medium that has yet to be invented. This is a principle that seems great on paper, but in practice is difficult to achieve without ending up with leaky abstractions, which is to say, details of the underlying data store being exposed through an interface that it’s abstracted behind.

This is a principle that I’m fond of, because it makes for interfaces and components that are easier to use. It simply states that methods should be very accepting of their input and very specific about their output. A good example in .NET might be a method which accepts a general type like IEnumerable<T> as a parameter and returns an extremely specific type such as List<T> as its result. By using this pattern, you can do things like use LINQ expressions in the method call, and you always know EXACTLY what you are getting back, and don’t have to violate the Liskov Substitution principle.
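A tiny sketch of that shape, with a made-up helper method:

```csharp
using System.Collections.Generic;
using System.Linq;

public static class NameFilters
{
    // Accepts the most general useful input, returns a specific, fully materialized result.
    public static List<string> LongNames(IEnumerable<string> names, int minLength)
        => names.Where(n => n.Length >= minLength)
                .OrderBy(n => n)
                .ToList();
}

// Callers can pass an array, a List<string>, or a LINQ query, and always get a List<string> back:
// var result = NameFilters.LongNames(new[] { "Ada", "Grace", "Linus" }, 5);
```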

Finally, there is the principle of Convention Over Configuration. This just means that the way you name classes, methods, etc. or the way you structure your solution has functional significance in how your solution operates. This is important because it allows you to use a declarative (tell what you want) vs. imperative (tell exactly how to do it) style of programming which is extremely powerful and reduces complexity of the finished solution. A good example is registration by convention, which is instructing your IOC container to automatically scan through class libraries and register concrete classes with their corresponding interfaces automatically, based upon some criteria. This saves you having to write long, tedious registration methods to do this manually. I talk more about conventions below.
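As a hedged sketch of registration by convention using the Microsoft.Extensions.DependencyInjection container (the AddByConvention extension method and the naming rule, class Foo implements interface IFoo, are my own assumptions, not built-in behavior):

```csharp
using System.Linq;
using System.Reflection;
using Microsoft.Extensions.DependencyInjection;   // NuGet package: Microsoft.Extensions.DependencyInjection

public static class ConventionRegistration
{
    // Convention: a class named Foo that implements an interface named IFoo
    // gets registered automatically, with no hand-written line per class.
    public static void AddByConvention(this IServiceCollection services, Assembly assembly)
    {
        var candidates = assembly.GetTypes().Where(t => t.IsClass && !t.IsAbstract);

        foreach (var implementation in candidates)
        {
            var contract = implementation.GetInterfaces()
                .FirstOrDefault(i => i.Name == "I" + implementation.Name);

            if (contract != null)
                services.AddTransient(contract, implementation);
        }
    }
}

// Usage at startup (hypothetical): services.AddByConvention(typeof(SomeService).Assembly);
```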

Principles, as well as good practices, help keep you on the happy path. This is an informal term describing what it is like when the development process is going well, obstacles are easily overcome, and square-peg-in-round-hole solutions are avoided.

Principles help guide the solution in the right direction, even when it gets confusing, requirements change, or other situations emerge which can cause you to feel overwhelmed. They provide a logical framework for understanding why something is a good pattern or practice. For example, the Dependency Inversion principle explains why using dependency injection is beneficial to building modern applications. Principles also inform executive technical decisions about software solutions at a higher level than patterns or practices. They determine which patterns/practices should be employed in building the solution, and which ones avoided.

Conventions are stylistic guidelines for structuring your code, naming files, classes and methods, or otherwise altering your software solution at a cosmetic level. They differ from patterns/principles/practices in that changes in convention may or may not result in changes to the actual compiled code when you build your solution. Note that changing how you name something under .NET Core, even if it’s the parameter to a method, could break compatibility against previous versions. For this and other reasons, it is worth having clearly defined coding conventions which are agreed upon by everyone on your team, even if you are a team of one.

When it comes to the application of conventions, consistency is key. Having consistent conventions makes your code more readable and thus more maintainable. It also makes it easier to merge your code into source control and handle the inevitable merge conflicts that arise when working on larger teams. Also, as mentioned above, conventions may have an actual impact on configuration or behavior of production code when using certain tools.

Conventions are important because they make your code base much more readable and give the source code a cohesive, professional appearance. This is especially important for open-source projects or corporate solutions being worked on by multiple people, because not having consistent conventions can adversely affect the usability of your solution. As already stated, they make merging more seamless and they may affect the behavior of automated tools. Overall, employing good conventions will help your solution evolve more gracefully over time and contribute to its maintainability.

There are no hard-and-fast rules in software development, and often you’ll need to trust your inner sense of judgment. This is exactly why I consider this profession to be more art than science.

I realize that this blog entry has run long, but I want to make you aware of some other terms you’ll run across again and again.

A framework is a comprehensive collection of tools, APIs, and other building blocks that act as the foundation for building software. For example, .NET is a framework. It includes compilers for various languages, the Common Language Runtime, and a number of framework packages that you can include in your projects. “Toolkit” and “framework” are sometimes used interchangeably, but the big distinction is in how comprehensive it is. Toolkits are generally smaller and have a narrower focus. If you get confused just remember that you build with a toolkit; you build on top of a framework.

As previously discussed, “API” stands for Application Programming Interface, and it refers to the outwardly visible classes, functions, methods, components, or other pieces that you will directly work with when using a toolkit or a framework. Think of it as the control panel for a machine, the implementation of which is hidden from you, possibly inside a (literal) black box. APIs might be part of software components that you download into your solutions using a package manager, or they might be external services that you communicate with using some network protocol, typically REST or a message bus. Note that APIs also encompass not just the methods and functions and components that comprise the API, but also the way in which you interact with them. This last point is important, because certain APIs will have greater or lesser amounts of ceremony (typically configuration steps which may seem asinine or tedious) involved in working with them.

Seriously though, I’d like to provide you with some sage advice that will help you in your life and your career as a software developer. Let’s call this the inner game of software development. I could write an entire blog series on this, but I’ll keep it brief.

If someone in your profession (especially if that person is in a managerial position) says “failure is not an option,” my best advice to you is to turn around and run the other way as fast as you can. Expectations of perfection from other people are a sign that you’re in a hell job, and you don’t deserve that. The fact of the matter is that both machines and mammals learn by making mistakes, correcting course, and then remembering the correct approach. Making mistakes is okay, and you are not your mistakes. Just make sure you learn from your mistakes, and when you do inevitably screw up, try to fail fast and fail small.

We all have an inner critic and beat ourselves up. The voice of that inner critic manifests as imposter syndrome, and it’s extremely common in this profession. If you don’t know what that is, just imagine an overwhelming feeling of insecurity because “you are a phony, you were never cut out for this, blah, blah, blah.” I experience it all the time too, and I’ve been programming computers for most of my life. No joke. I taught myself DOS 6.0 commands and started writing programs in a language called Quick Basic on a 286 when I was a kid (if you don’t know what any of those are, then yes, you are a Millennial). You know what imposter syndrome really is? It’s your ego messing with you. If you want to short-circuit imposter syndrome and silence your inner critic, here are some suggestions:

I define a mentor as an individual who has an active and personal involvement in your career development, with whom you can consult at length for professional advice, and who has an interest in seeing you succeed. Think of a mentor as Yoda: he or she teaches you individually, responds to your questions, and uses his/her resources to clear the way for you to fulfill your full potential. Very few people realize the awesome privilege of having a mentor, but if you do, then consider yourself lucky. The rest of us, myself included, haven’t had that opportunity, but there’s an alternative — finding good role models. What is a role model, as opposed to a mentor? I define a role model as a person that you may or may not know personally, and who may not even still be alive. For example, Abraham Lincoln is on my list of role models. I encourage you to make a list of people you admire and then find out as much about them as you can, and figure out what makes each of them tick. Then, emulate the behaviors or qualities they embodied which accounted for their success or otherwise made them good people. You’d be surprised at the knowledge and wisdom you’ll gain from this that you can apply directly to your own life.

Parting advice: just remember that your journey is a marathon, not a sprint. There will be setbacks and obstacles that take time to overcome, but with the right attitude, commitment and willingness to learn, you can and will succeed.

In this lengthy blog entry, I explained the basic types of thinking that are involved in the effort of software development, and problem-solving in general. I laid out some basic software design and architectural concepts and discussed briefly how software is actually built. I mentioned patterns, practices, principles and conventions, and how those influence the software development process. Finally, I gave some sage wisdom representing the “inner game” of software development to help you along your way. There’s one more thing I forgot to mention… make sure you’re having fun with it too!
