Think about a day at work when your team is full of energy and ideas. They solve problems quickly and work well together. And you? You’re helping them succeed and focusing on the big picture. Your job isn’t just about managing; it’s about leading in a new way.
So, how do we make this happen? How do we unite a group into a strong team that does better than we ever hoped? The answer lies in encouraging initiative.
In this blog post, I’ll discuss the 7 Levels of Initiative, an idea drawn from Stephen R. Covey’s work. It’s not just a way to check how well a manager is doing; it’s a framework for helping our teams be their best.
Initiative is about taking action before being asked. It’s seeing a problem or opportunity and addressing it right away. In engineering, having initiative means being ahead of the game - thinking ahead, solving problems quickly, and constantly looking for ways to improve. It shows a team member is engaged, responsible, and ready to contribute creatively. In short, initiative is key for a team to innovate, adapt, and excel.
A common notion I often encounter is the belief that initiative is limited to architecture changes or big team-process changes, and that once those are done, there is no room left for more initiative. However, I believe initiative starts and manifests in everyday situations, even within established frameworks of teamwork. Initiative isn’t just about making large-scale changes or redesigning project architectures. It’s often about the smaller, impactful actions that enhance team performance and project outcomes.
For instance, instead of waiting for next week’s scheduled retrospective, a team member might proactively arrange an immediate meeting to tackle a recurring issue, showing a keen sense of urgency and problem-solving. Another example is a team member taking the lead on a small training session to share knowledge about a new tool.
In a large team (or teams) setting, it’s vital to identify signs of low initiative among team members. Phrases like “That’s not my role,” “No one told me I should do this,” or “I have never done this before” are typical indicators. These statements suggest a reluctance to step out of a defined role or comfort zone, waiting for explicit directions rather than seeking opportunities to contribute.
This mindset can slow the team’s progress, especially in larger groups where individual initiative drives efficiency and innovation. Recognizing and addressing these signs through coaching or creating opportunities for members to take on new challenges is essential for nurturing a proactive and dynamic team environment.
Stephen R. Covey’s classification of initiative comprises 7 distinct levels, each representing a different degree of decision-making autonomy. It’s crucial to recognize the shift in decision-making power between levels 1-4 and 5-7, with the latter starting with a ‘do it’ approach.
At level 1, team members are entirely reactive, depending on direct instructions. Typical of new hires or highly structured environments, this level shows no proactivity. Such individuals don’t ask what to do next or show curiosity; they simply wait for external input.
At level 2, team members start to show interest by seeking tasks, indicating a move toward greater engagement. Questions like “What can I do next?” or “Why are we working this way?” arise, directed at you, the team, or themselves.
A significant shift occurs at level 3. Engineers not only ask questions but also provide answers, leading to idea generation. The key is their ability to propose concrete changes to the status quo, offering multiple potential solutions to a problem.
At level 4, building on the previous level, an engineer with multiple solutions for a problem can choose one, devise an action plan, and inform others of their intent. The plan may vary in scope, from a few days of changes to significant team-process adjustments, large refactorings in the code base, or larger architecture changes.
At level 5, autonomy increases as team members act independently and report back immediately. This level reflects trust and a solid grasp of responsibilities. It’s evident when someone updates you with results and conclusions from their independent work.
At level 6, team members take complete ownership of tasks, making decisions independently. They align closely with the team’s vision, requiring minimal oversight. Periodic updates are agreed upon to monitor progress.
Level 7 is the pinnacle of initiative: engineers act independently and only communicate the completion of their tasks or changes, bypassing progress updates to managers or tech leads.
Understanding the levels of initiative is crucial for delegating tasks effectively. When I assign a task, I also specify the expected initiative level, which helps set clear expectations. Early in my role as an engineering manager, I learned this the hard way. My delegation approach was vague, leading to varied outcomes based on the initiative levels of my team members. For instance, asking someone to work on “feature X” without specifying the expected level of initiative could result in a barrage of questions (level 2), a list of proposed solutions (level 3), or even a completed feature (level 7).
This experience taught me the importance of clear communication and setting precise expectations about the desired level of initiative for each task.
When dealing with higher levels of initiative, several factors become increasingly important:
At higher levels, accountability plays a significant role. Often, people may shy away from accountability, possibly due to fear of failure or lack of confidence. However, true ownership of a task requires being accountable and making decisions. Being accountable means not just delivering what is agreed upon but also proactively seeking ways to enhance the product and improve processes. A proactive approach and higher levels of initiative are fundamental to this ownership.
In my 1-on-1 meetings, I aim to challenge my team members to ascend to higher levels of initiative - of course, tailored to each individual’s capabilities. However, it’s important to recognize when someone might be pushed too quickly to a higher level. For instance, if we agree on level 6 but the engineer struggles and makes mistakes, it’s crucial to be able to step back to a lower level of initiative. This adjustment should be clearly communicated, specifying the duration and method of this change. Such flexibility and guidance are key to effectively nurturing and developing each team member’s potential.
To boost initiative within a team, focus on these three strategies:
It’s essential to communicate that initiative and proactive behavior are important. Clearly define these behaviors and how they align with the team’s goals and the overall project vision. Misunderstandings often arise when managers expect initiative but don’t clarify what constitutes ‘good performance’ or ‘desirable behavior.’ In my experience, discussing the ‘7 Levels of Initiative’ with team members clarifies these expectations, and it’s something I do with all my teams. Engineers need to understand that their performance is evaluated not just on task completion, like delivering features or fixing bugs, but also on their attitude and behavior.
Enhancing initiative in your team means empowering them at various levels. Start by assigning meaningful tasks that challenge and develop their skills, signaling trust and encouraging ownership. Foster an environment that values autonomy, allowing team members to make decisions and bring their ideas to fruition. Open communication channels encourage sharing ideas and feedback, and embracing calculated risk-taking, with failures seen as opportunities for learning, is crucial. Because I’ve shared and discussed this model with my team, our day-to-day conversations about these topics are much easier.
Understanding individual motivators and ensuring alignment with team goals is important. Recognize that not everyone will jump from lower to higher initiative levels instantly; gradual progression is key, along with providing room for mistakes and learning.
Acknowledging and rewarding initiative is often more effective than corrective feedback alone.
Consider my personal story: It was a warm and sunny day. My son (4 years old) was eager to wash our new car, as he was excited about it. We started with preparations. I asked him to prepare water with car shampoo, since he enjoys playing with water (aligning the task with his motivators). Unfortunately, he used the entire bottle of shampoo instead of a few drops. Rather than penalizing him, which could discourage future help, guidance is more effective. Truth be told, a bottle of car shampoo is not expensive, and he was so passionate about cleaning the car itself that he deserved a strong reward for the whole effort.
Similarly, positive feedback and rewards foster a culture of initiative in a professional setting. Rewards can be financial, like bonuses, but often the more impactful ones are non-financial, such as public recognition, appreciation, and new opportunities.
In the context of performance reviews, understanding initiative goes beyond just completing a task well. As a manager, I place significant emphasis not only on the outcome of the task but also on the level of engagement and involvement demonstrated by the team members in accomplishing it.
Initiative is reflected in how an individual approaches a task - their ability to think proactively, anticipate potential challenges, and engage with the task creatively and enthusiastically. It’s about showing eagerness to take on responsibilities, to contribute ideas, and to go the extra mile. When reviewing performance, I look for signs of this deeper engagement: Has the team member shown a willingness to learn and adapt? Have they taken steps to improve processes or collaborate effectively with others? Importantly, the initiative level relates to the task, not to the engineer: each engineer works on many tasks and may show a different level on each one.
In essence, initiative in performance is about the attitude and approach towards work, not just the technical execution of tasks. It’s this comprehensive view of performance that truly reflects a team member’s contribution and growth potential.
When evaluating candidates, I’ve encountered at least twice engineers who excelled in coding and system design yet exhibited a noticeable lack of initiative in their behavior, and who had to be rejected as a result.
This presents a unique challenge, especially in roles where proactive engagement and the ability to drive projects independently are crucial (particularly senior roles). During the behavioral interview, it becomes apparent that while they possess strong technical skills, their approach to problem-solving and project management might be more reactive than proactive. This gap is significant because, in dynamic work environments, the ability to anticipate challenges, propose solutions, and take charge of situations is as valuable as technical proficiency. Addressing this during the interview involves probing deeper into their past experiences and scenarios where they might have taken the initiative. It’s also about evaluating their potential to develop this skill, considering the team’s current dynamics and the support structure available to nurture such growth.
In this post, we explored what makes a great engineering manager: seeing your team work well on their own. We talked about the 7 Levels of Initiative, based on Stephen R. Covey’s ideas, showing how important it is for team members to take action on their own, from big project changes to small daily tasks.
We learned that it’s key to be clear when you give tasks to your team, so they know what you expect from them in terms of taking charge. Higher levels of initiative need trust, skill, willingness, and creativity. It’s also about being okay with taking risks and being responsible for your work.
When looking at how well team members are doing, it’s not just about finishing tasks. It’s also about how they approach their work and if they’re willing to try new things. For bigger teams, it’s important for each person to manage their own work because the manager can’t guide everyone all the time.
Lastly, we saw that in job interviews, it’s important to find people who can think ahead and solve problems on their own, not just those who are good at coding.
In short, for a team to be successful, everyone needs to be able to think for themselves and be willing to take on new challenges. This helps the team work better and makes the job of an engineering manager rewarding.
While I’m no Shakespeare, writing gives me a sense of clarity. It’s a medium where I can truly articulate my thoughts and share knowledge. But let’s face it, written communication has its challenges - no facial expressions, no tone, just words on a screen. Understanding this gap is crucial for effective communication. This blog post aims to dig into the complexities of written communication, offering you a detailed roadmap for improvement.
Whether you’re drafting an extensive project plan or shooting off a quick Slack message to your team, the principles outlined in this blog can elevate your written communication across all platforms.
Why is written communication challenging?
In verbal communication, tone, pitch, and facial expressions play a significant role in delivering your message. These non-verbal cues can help clarify meaning, show emotion, and give nuance to what you’re saying. Written communication lacks these elements. Words on a screen don’t carry tone or facial expressions, making it easier for the message to be misinterpreted. For instance, what you write with a positive intent could come across as neutral or even negative to the reader.
In the realm of engineering management, the ability to communicate effectively is not just a nice-to-have skill; it’s a critical asset that can make or break projects and teams.
Firstly, it’s often the foundation of remote and distributed work, serving as a record for accountability.
Secondly, clear writing minimizes misunderstandings, saving time and resources.
Thirdly, your written words set the team’s tone and culture, impacting motivation and work environment. Lastly, strong written communication skills enhance your reputation as an effective leader.
Understanding these reasons gives you the motivation to improve, setting the stage for the practical tools and strategies that follow.
Mastering written communication in engineering management involves more than just joining words together. There are several crucial elements that make your messages effective, clear, and impactful.
Your intentions should be transparent in any written communication. This is particularly important when delivering tough news or tasks. By clarifying your intent, you help the recipient understand the ‘why’ behind your message, which builds trust and transparency, even in less-than-ideal situations.
Instead of a vague, “The project will be delayed by two weeks,” you could write:
“We’ve identified software bugs in the driver during the performance testing phase that impact the functionality of our feature, leading to a two-week delay in our project timeline. The goal of this message is to align and inform all our dependent teams. We are dedicating extra engineering resources to fix bugs and have scheduled an additional BugBash for November 21st. This is to ensure the highest quality before the upcoming release.”
In any form of communication, context is key. Your reader doesn’t automatically know what you know; they only see what you’ve chosen to share in the written format. This limitation can lead to misinterpretations or misplaced emotions if the necessary background is lacking.
In my early managerial role, I used to send messages that consisted mainly of questions. For example, I’d ask, “Can you update me on Epic X?” or “Have we resolved the issue with the database?” While these questions were clear to me, they were often met with confusion or delayed responses from my team. Without providing context, I was accidentally creating a communication gap.
It wasn’t until I started including additional information and background that I noticed a positive shift. When asking for an update on Epic X, I’d add, “We have a client meeting this Friday, and it would be great to demo the functionality provided by Epic X.” For the database issue, I might say, “Senior management is reviewing system performance next week; resolving this issue beforehand could positively influence their view of our team.”
An intriguing but often overlooked aspect of written communication is the tendency for your words to shift in emotional weight when read by someone else. Understanding this can help you become more effective in conveying your thoughts and feelings accurately.
Here’s what typically happens:
How do you feel when you read those messages?
Being aware of this tendency can help you adjust your messaging style. It’s essential to go the extra mile to clarify your intent, making sure that your positive messages are unmistakably positive, your neutral messages carry a hint of positivity, and your negative messages are cushioned with understanding and context.
It’s also equally important to consider the emotional state of the person on the receiving end. Your words don’t exist alone; they are processed through the lens of the reader’s current feelings, experiences, and concerns. If your team member is dealing with a personal crisis or even just had a rough day, their perception of your message can significantly differ from your intent. For example, a simple request for a status update could be seen as adding more pressure, making their day even more stressful.
Taking a moment to understand the emotional climate can make a big difference in how your message is received. If possible, start your communication with a quick check-in or a brief acknowledgment of their situation. Something as simple as, “I understand you’ve had a busy day, but when you get a chance, could you update me on the project?” can set the right emotional tone.
Acknowledging the emotional state of your receiver means communicating with empathy and awareness, which helps to build a foundation for more meaningful and effective interactions.
In the quest for efficiency, it’s easy to use quick, command-style messages. For example, if you often say “my ask is” or frequently use commands like “do this,” “register here,” it can make you seem overly controlling or as if you don’t trust your team. Let’s see some examples:
Example 1
Let’s say you often tell your team, “My ask is to complete the report by end-of-day.” While you might think you’re being clear, your team may feel pressured. They might wonder why you’re focusing on “my ask” instead of the team’s shared goals.
To avoid this, try rephrasing your requests to be more of a team effort. Instead of saying, “My ask is to complete the report,” you could say, “Could we aim to finish the report by end-of-day? It would help us make progress.” This way, you’re inviting teamwork and respect, without making the task seem less important.
By paying attention to how you word your messages, you can be both clear and supportive, making sure your team feels encouraged rather than pressured.
Example 2
We all have moments where we need to send a message in a hurry. In such instances, you might be tempted to write something brief like, “Need update now.” However, without context, this can make the recipient anxious or defensive, interpreting the urgency as a sign of mistrust or dissatisfaction.
Consider adding a sentence to offer context, especially when you’re pressed for time. For example, you could say, “Apologies for the brief message; I have only 2 minutes but wanted to check in. Need an update now, if possible.” This added context can transform the message from seeming demanding to being understood as a necessity of the moment.
By including even a small amount of additional information, you offer the other person a chance to see things from your perspective.
Before hitting the ‘Send’ button, it’s beneficial to pause and review your message one last time. This final check serves as a practical tool to ensure you’ve communicated your intent and context clearly. Ask yourself these questions:
By making the ‘Final Check’ a routine part of your written communication, you contribute to a more understanding and effective exchange of ideas. It takes only a few seconds but can save you from misunderstandings that take much longer to resolve.
In the early stages of my managerial role, I had the habit of responding immediately to Slack messages. While this might have appeared as a sign of being engaged and responsive, it often led to hasty replies that lacked nuance and thoughtfulness.
Recognizing the downside, I shifted my approach. Instead of replying on the spot, I began to create draft messages first. This allowed me the opportunity to revisit them, add necessary context, or adjust the tone before sending. This approach served as a ‘safety net,’ ensuring my responses were well-considered.
Eventually, I adapted this further. Now, I simply save messages that require a thoughtful response for later. I don’t even write draft responses. This gives me the time to assess the importance and context, allowing me to craft a message that provides value and clear communication when it’s finally time to send it.
This change in habit, specific to Slack communications, has been a small but impactful step in enhancing the quality of my interactions with my team. It allows for better judgment and leads to more effective, meaningful communication.
You may have noticed that this blog post is structured following the principles of The Golden Circle, a concept promoted by Simon Sinek. It began with the ‘Why,’ emphasizing the importance of effective written communication. We delved into the ‘What’ by discussing the key components that make your messaging clear and impactful. Finally, we explored the ‘How,’ offering practical tools and strategies you can implement immediately.
By adopting the Golden Circle format, my aim is not just to provide you with information but to truly engage with you on why this topic matters, what you can gain from mastering it, and how to go about it. This approach seeks to ensure a comprehensive understanding of the subject, thereby empowering you to improve your communication skills effectively.
The ‘What If’ section isn’t just about laying out possible scenarios and solutions; it’s also about understanding who your audience is. Knowing the people you’re communicating with allows you to better anticipate their questions, concerns, or even objections. By doing so, you can proactively address these in your ‘What If’ segment.
Imagining how your message might be read or interpreted gives you the chance to clarify points that could be misunderstood. This level of audience awareness adds another layer of effectiveness to your written communication. It not only prepares you for various reactions but also offers a safety net for those who are reading your message, making the entire communication process more seamless and effective.
Identify areas where your written communication might be lacking. Is it in clarity, tone, or context?
Before writing, clearly outline what you want to achieve with your message. Make it a habit to restate this at the end of your communication.
Make sure to provide background information when discussing projects or asking questions. A well-contextualized message minimizes misunderstandings.
Practice writing messages with a tone that matches your intended emotion. Reread your message, imagining how it would feel from the receiver’s perspective.
Prioritize checking in on the emotional state of your message recipients, especially before delivering important or sensitive news.
Shift from issuing orders to inviting collaborative action. Use phrases like “Could we” instead of “I want.”
Before sending, perform a quick review. Ensure clarity, tone, and context are appropriate.
Rather than replying instantly, draft your thoughts or save messages for later. Use this time to enrich your message with context and tone.
Embrace the ‘Why-What-How’ approach to structure your communications, be it emails, presentations, or meetings.
Know your audience well enough to anticipate questions or objections they may have, and address these proactively.
By following these steps, you’re not just improving your written communication, but also fostering a culture of clarity and mutual understanding within your team. Each step is designed to build upon the previous, setting you on a comprehensive path to becoming an expert in written communication. Feel free to adapt these steps according to your own needs and experiences!
This simple example from carpentry resonates deeply with the complex world of software engineering. Have you ever met a software engineer who doesn’t like to be called a craftsman? The nuance between outcome and output is not merely philosophical; it has profound practical implications. As software engineers, focusing on both aspects can lead to projects that are not only technically sound but also aligned with user needs and business goals.
What is Output?
Output in the software engineering context refers to the tangible products or deliverables at various stages of development.
Examples:
Outputs align well with immediate targets and development sprints. Focusing solely on outputs, however, may lead to missing the bigger picture or neglecting user satisfaction.
What is Outcome?
Conversely, the outcome emphasizes the broader impacts of those outputs, focusing on how the end-users interact with the software or how it impacts the overall organizational goals.
Examples:
Outcomes often correlate with the product’s ultimate goal, aligning activities with overarching objectives. The hardest part of outcomes is ensuring the software meets the real needs of the users. Outcomes often require a long time to materialize and may be multifaceted - but not always. In my career, I have found that small changes (in processes or code) can produce significant outcomes.
When can we say that an outcome was not delivered? For example, when users love the product and engineers are elated about the code, but the product generates a loss because the cost of running it is too high.
Software engineers often struggle to balance immediate outputs and broader outcomes due to the inherent complexities and conflicting priorities in software development. On the one hand, there’s pressure to deliver tangible results quickly, such as completing specific features or meeting sprint deadlines, which emphasizes the output aspect. On the other hand, the broader outcomes, like enhancing user satisfaction or aligning with strategic goals, require a more nuanced understanding of the end users’ needs and the organization’s long-term vision. This can necessitate careful planning, collaboration with various stakeholders, and continuous feedback and iteration. Striking the right balance demands technical proficiency, strategic thinking, empathy towards users, and close alignment with project-level objectives and organizational goals.
Maintaining the balance between immediate outputs and broader outcomes is a shared responsibility that involves multiple stakeholders within the software development process. While software engineers are at the core, ensuring that their work aligns with both short-term deliverables and long-term goals, project managers and product owners play a vital role in defining clear expectations and keeping the focus aligned with overall objectives. Ultimately, engineering managers and product owners are responsible for the product. How do you distinguish a junior engineer from a senior engineer? The difference is that the former should focus mainly on outputs, while the latter should focus more on outcomes.
Nevertheless, achieving this balance is a collective effort that requires clear communication and alignment across different roles.
Because of my role as an Engineering Manager, I have to assess engineers’ work. In evaluating a software engineer’s performance, both outcome and output play distinct roles. Traditionally, performance might be assessed based on tangible outputs such as the number of features developed or bugs fixed. While these metrics provide a clear and measurable way to gauge productivity, they can sometimes overlook the broader impact and alignment with organizational goals - the outcomes. I believe the solution is balance. Combining output-oriented metrics with outcome-driven evaluation ensures a more holistic understanding of an engineer’s performance, rewarding not only the quantity of work but also its quality and contextual relevance. This approach promotes a culture that values meaningful contributions and encourages engineers to strive for excellence in both immediate tasks and the broader organizational vision.
Balancing output and outcome requires a thoughtful and multifaceted approach that combines various strategies.
The grim saying “the operation was successful, but the patient died” illustrates a situation where the immediate task was completed (the operation, or output), yet the ultimate goal was not achieved (the patient’s survival, or outcome).
In software engineering, it is all too common to create a technically flawless piece of software (a successful operation) that fails to meet the users’ needs or business goals (the patient died).
This underscores the critical importance of balancing output and outcome. While outputs are essential for measuring progress and achieving immediate goals, outcomes preserve the broader vision: a sound process, good software, and user satisfaction.
In Polish, we also have two separate words to distinguish between Output and Outcome.
Understanding these nuances in translation is crucial, especially when discussing matters in a business or technical context in Polish. This ensures that the intended emphasis, whether on immediate deliverables or broader implications, is clearly communicated and understood.
Before we start, I have good news and bad news.
The good news is that at some point you will have to unlearn what you have learned. Once you learn something, you can become blind to other things you could learn. Remember, you are a creative person; your mind should stay free and open to new waters. Do not be afraid to forget stuff.
The bad news is that you will probably never stop learning.
Over time, some things start to look similar, and we use similar design patterns in different contexts. Even so, similarity is not an excuse to stop learning. You need thousands of hours of practice and writing software to become good at it. Even after many years of writing software, I know that I have deficiencies in some areas that I want to improve.
I’ve tried to build this competence-area map to be as universal as possible. Nevertheless, I realize that backend software engineers will find it most applicable, while other specializations may find it less useful.
If any of those concepts are new to you, do not try to learn everything in one week. Learning about software engineering is like going down the rabbit hole. You think you get it, but then you realize there is another level of abstraction or something else to discover. If you are a person who likes following rabbit holes, this guide may be for you.
It all started with writing down a list of desired competencies for myself a few months ago. Just as many of us get a periodic medical examination, I wanted to examine my professional skill set. I did it in Google Sheets and ended up with 50 rows - each describing a skill, knowledge area, or competence that I want to build or improve in myself.
Hard technical skills are a necessity when you are a software developer. While writing down technical skills, I divided them into three groups:
General technical skills are skills that will not become outdated quickly. On the other hand, practicing them takes a lot of time; in my opinion, you need years, rather than months, to master them.
Algorithms. I get alarmed whenever I hear someone state that “learning algorithms is a waste of time.” Studying algorithms is the most essential thing you can do when learning computer science. Algorithms are everywhere, and they are often quite different from one another. By studying more and more of them, you develop different solutions for the problems that arise during a regular job. I have to admit that I learn mostly from my experience, and studying algorithms is an eye-opener for new approaches to solving things. If I hadn’t studied algorithms, I would propose worse code, simply because I wouldn’t be familiar with some techniques.
Data structures. Each data structure is designed to arrange data to suit a specific purpose. Sometimes we want to find data quickly, sometimes to store it quickly. Knowing fundamental data structures, like hash maps or trees, is a necessity. It is also vital to be familiar with more advanced structures: probabilistic data structures, immutable data structures, or, in the BigData world, distributed data structures.
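To make the trade-offs concrete, here is a small, self-contained Java sketch (the class and variable names are my own) contrasting two fundamental structures: a HashMap gives constant-time average lookup with no ordering guarantees, while a TreeMap keeps its keys sorted at the cost of logarithmic operations.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

public class StructureChoice {
    public static void main(String[] args) {
        // HashMap: fast O(1) average lookup, iteration order is unspecified.
        Map<String, Integer> byHash = new HashMap<>();
        // TreeMap: O(log n) operations, but keys are always iterated in sorted order.
        Map<String, Integer> byTree = new TreeMap<>();

        for (String word : new String[]{"cherry", "apple", "banana"}) {
            byHash.put(word, word.length());
            byTree.put(word, word.length());
        }

        // Same content, different guarantees: pick the structure for the purpose.
        System.out.println(byHash.get("apple"));   // 5
        System.out.println(byTree.keySet());       // [apple, banana, cherry]
    }
}
```

The data is identical in both maps; only the access guarantees differ, which is exactly the kind of decision this knowledge prepares you for.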
Programming. Can you write code in your primary programming language without an IDE (Integrated Development Environment)? Can you do it using only a text editor and a compiler? Without knowing the language syntax, built-in libraries, data structures, and fundamental data types, it is impossible to be productive. Although I use an IDE on a daily basis, it provides little to no help when it comes to solving simple and primitive code mistakes.
Architecture. A system with clean and transparent architecture is easy to understand, maintain, and develop. There is no silver bullet here. You have to know at least a few architectural styles to communicate your ideas and to understand others. You have to understand the basic concepts: when someone says, “Event sourcing is ideal for this problem,” you need to be on the same page to discuss the details of that architectural style.
Databases. In the initial steps of designing a new system, the database might seem an irrelevant detail of your application architecture, but sooner or later it becomes an element that cannot be ignored. Embrace the polyglot database style. Analyzing database queries, managing indexes, and configuring partitioning and replication are must-have knowledge. There are far too many databases on the market to know each of them well, but you have to know at least one database well and know when to use which.
Design patterns. If your primary language is object-oriented, you have to know the basic design patterns (reading a book about design patterns may help). Beyond that, there are also rules like SOLID, KISS, and DRY, as well as DDD and CQRS, which are more like architectural styles. Use those patterns, but do not overuse them.
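As a small illustration of the idea (not a prescription), here is a minimal Strategy pattern sketch in Java; the Checkout class and the discount rules are invented for the example. The pricing behaviour is a pluggable strategy, so new rules can be added without touching the checkout code.

```java
import java.util.function.UnaryOperator;

// Strategy pattern: the pricing rule is a pluggable behavior passed in,
// so new rules can be added without modifying Checkout (Open/Closed principle).
public class Checkout {
    private final UnaryOperator<Double> discount;

    public Checkout(UnaryOperator<Double> discount) {
        this.discount = discount;
    }

    public double total(double amount) {
        return discount.apply(amount);
    }

    public static void main(String[] args) {
        Checkout regular = new Checkout(amount -> amount);            // no discount
        Checkout blackFriday = new Checkout(amount -> amount * 0.8);  // 20% off

        System.out.println(regular.total(100.0));      // 100.0
        System.out.println(blackFriday.total(100.0));  // 80.0
    }
}
```

Note that modern Java lets a lambda play the role of the classic strategy interface, which keeps the pattern lightweight instead of ceremonial.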
Coding styles. There are different approaches to writing code: TDD (Test-Driven Development), the test-first approach, pair programming, writing a PoC (proof of concept), or writing scripts for one-time use. Each of these styles aims to achieve different things. Try them and learn the advantages and disadvantages of each approach.
Concurrency, no wonder, is the scariest topic for many developers. It is hard to test, debug, and reason about. Not only do you have to understand the behavior of concurrent programming in your language, but you also need to understand your platform well. Take the JVM, for example. At first glance, you download it and it works. For me, this is a typical example of a rabbit hole: you have to investigate the Java Memory Model, garbage collection, just-in-time compilers, and bytecode. You cannot study all of this in one day; it may take weeks.
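A tiny, self-contained example of the kind of pitfall hiding in this area: incrementing a shared counter from several threads. The sketch below (my own illustration) uses AtomicInteger, whose incrementAndGet is atomic; a plain int++ in the same place could silently lose updates.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class CounterDemo {
    public static void main(String[] args) throws InterruptedException {
        AtomicInteger counter = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(4);

        // 4 threads x 10_000 increments. AtomicInteger makes each increment
        // atomic, so no update is lost; an unsynchronized 'int++' here would be
        // a read-modify-write race and could drop increments without any error.
        for (int t = 0; t < 4; t++) {
            pool.submit(() -> {
                for (int i = 0; i < 10_000; i++) {
                    counter.incrementAndGet();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);

        System.out.println(counter.get()); // 40000
    }
}
```

The scary part is that the broken variant often passes casual testing, which is exactly why this topic deserves deep study.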
CI and CD. I don’t pay much attention to whether it is Jenkins, Bamboo, or GitLab CI. Pick one and know it well. What matters is understanding why we do Continuous Integration and Continuous Delivery. The tool does not matter; the right configuration to achieve the right goals is crucial.
Security. This is a vast topic. The essentials are asymmetric cryptography (public/private keys), authentication methods, and access delegation methods such as OAuth. You also have to know the different vulnerability classes (like SQL injection or cross-site scripting), understand what types of malware exist and what the root causes of exploits are, and, finally, know your technology stack well enough to design and build safe applications.
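To show one of those vulnerabilities concretely, here is a minimal, self-contained sketch (my own illustration) of why naive string concatenation enables SQL injection; the safe alternative with JDBC, a PreparedStatement, is indicated in the comments.

```java
public class InjectionDemo {
    // Naive concatenation lets user input rewrite the query itself.
    static String naiveQuery(String userInput) {
        return "SELECT * FROM users WHERE name = '" + userInput + "'";
    }

    public static void main(String[] args) {
        String malicious = "x' OR '1'='1";
        // The "name" is no longer data: it has become part of the SQL logic.
        System.out.println(naiveQuery(malicious));
        // SELECT * FROM users WHERE name = 'x' OR '1'='1'

        // With JDBC the fix is a PreparedStatement, where the input stays data:
        //   PreparedStatement ps = conn.prepareStatement(
        //           "SELECT * FROM users WHERE name = ?");
        //   ps.setString(1, userInput); // the driver binds it safely
    }
}
```

The concatenated query above would match every row, which is the classic authentication-bypass shape of this attack.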
Network communication. The first thing to know is the “8 fallacies of distributed computing.” They help you understand the behavior of a networked application and the characteristics of network communication. Then, digging deep into the network stack helps you understand communication via modern protocols and debug the problems that sooner or later happen.
Rapidity. Practice, practice, practice until you achieve fluency in general programming. After a few years in software development, you should be able to swiftly develop a new application, fix a bug, or re-engineer a complete system. Given no time limit, anyone can probably write any system; speed comes only with practice.
Supporting skills are not your core competence; instead, knowing them helps you see the complete spectrum of the software engineer’s toolbox. You do not have to be an expert in each area, but you should be familiar with all of them and able to develop something in each.
Distributed systems. Truth be told, I don’t think I will ever again write a system that runs on a single computer. Because we are no longer able to build significantly faster CPUs, or faster single machines in general, it is reasonable to host your services on commodity-class machines or in a shared cloud environment. We all need to understand how to design, develop, and maintain a distributed system.
Statistics and math. I noticed that I often use concepts from mathematical statistics that I learned in school: percentiles, standard deviation, quartiles, mean, distributions. I use them all the time, for example, when analyzing service response times or working with almost any kind of data.
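As a concrete illustration (my own sketch, using the nearest-rank method), here is how a percentile of service response times can be computed in plain Java:

```java
import java.util.Arrays;

// Computing a percentile of service response times (nearest-rank method).
public class Percentiles {
    static long percentile(long[] values, double p) {
        long[] sorted = values.clone();
        Arrays.sort(sorted);
        // Nearest rank: the smallest value that covers p percent of the samples.
        int rank = (int) Math.ceil(p / 100.0 * sorted.length);
        return sorted[Math.max(rank - 1, 0)];
    }

    public static void main(String[] args) {
        long[] responseMillis = {12, 15, 11, 230, 14, 13, 16, 18, 950, 17};
        System.out.println(percentile(responseMillis, 50)); // 15  (median)
        System.out.println(percentile(responseMillis, 90)); // 230 (tail latency)
    }
}
```

Note how different the median and the 90th percentile are here: that gap between typical and tail latency is precisely why averages alone mislead when you analyze response times.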
BigData. It is good to have some experience with Hadoop, Spark, job-scheduling mechanisms (like Airflow), and stream processing. In a world where we treat data as a bar of gold, you have to know how to work with it. A few years ago, it was hard to find someone doing big data, but now I think it is a necessity.
Web. A full-stack developer is the most wanted developer. In my opinion, people claiming to be full-stack developers most of the time prefer either the frontend or the backend. It is good to understand both worlds: if you prefer the backend, you shouldn’t be afraid to change the frontend and vice versa, but I rarely see experts in both worlds.
Machine learning. For a software engineer, machine learning is an entirely different career path. Still, machine-learning models are becoming more and more accessible, so you will only benefit from knowing how they are built or how to train and work with an artificial neural network.
Unix. Why do I recommend Linux or Mac for a software engineer? Because of the Unix philosophy, for example, the text-processing toolset. You may argue that a software engineer should write software and not work with raw files, but far too often I see a use case where a tool like sed, awk, or even a simple grep does the job. That and many other tasks are simple in Unix.
DevOps toolbox. In your company, the infrastructure for your services is probably ready and configured, and there is a dedicated person to run and manage it. Unfortunately, problems are unavoidable, and knowing how and where your services are running may be crucial when there is a failure.
Site Reliability Engineering mindset. By this I mean knowing how to release, monitor, and manage an emergency: all the things you can read about in the free book by Google engineers. Every software developer who has an application in production should embrace those concepts.
Computer architecture. Do you know computer or server architecture? Do you know about CPU caches, RAM, or network bandwidth? Do you know the limitations of hard disks? It is essential to understand bare-metal machines and their physical limitations. It is also good to know about computer construction: nowadays we tend to go to the shop and buy a notebook, so it might be an unusual experience to build a PC from scratch yourself.
In this section, you have to answer what is trendy right now. Take a look and write down your answers. Do not be sentimental about past technologies; this is not a confession.
Some choices are clear to me and rather stable, like Git as a version control system. Your primary programming language probably influences other choices. That is fine, as long as you are aware that you are not working with obsolete and deprecated technologies.
My programming language of choice is Java (though I have a remarkably pleasant experience with Kotlin), with Gradle as a build tool, Git for version control, and Project Reactor. I know that Project Reactor is ugly. Keep in mind that the topic of asynchronous and non-blocking communication is continuously changing: at the beginning it was CompletableFuture, then RxJava, and now it is a choice between Project Reactor and Kotlin Coroutines.
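To illustrate where that evolution started, here is a minimal, plain-JDK CompletableFuture sketch (the names and values are my own) showing non-blocking composition with a fallback:

```java
import java.util.concurrent.CompletableFuture;

// Non-blocking composition with plain JDK CompletableFuture:
// the starting point of the async evolution described above.
public class AsyncDemo {
    static CompletableFuture<String> fetchUser() {
        return CompletableFuture.supplyAsync(() -> "alice");
    }

    public static void main(String[] args) {
        String greeting = fetchUser()
                .thenApply(name -> "Hello, " + name)    // transform when ready
                .exceptionally(ex -> "Hello, stranger") // fallback on failure
                .join();                                // block only at the edge
        System.out.println(greeting); // Hello, alice
    }
}
```

RxJava, Project Reactor, and Kotlin Coroutines all generalize this same shape: describe the pipeline declaratively, and defer blocking to the very boundary of the system.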
Technical skills are not everything. Software Engineers work in teams, so communication skills are as relevant as the ability to code.
Communication. For me, excellent communication is not about speaking or giving a great presentation; speaking may be a monologue. Dialogue is when you actively listen to others. Never assume that you know another person’s intentions or goals: paraphrase and ask for clarification.
Be authentic. Do not pretend to be someone you are not.
Language: Business and technical. Communication is tricky. Business people tend to see profits and results, unlike technical people, who tend to see problems and potential bugs. You have to learn how to communicate your ideas to different groups of people.
Respect. It is the magic glue that holds teams together. We are all equal, so we should treat everyone the same way. Our workplaces should allow us to express ourselves freely; you have to do it professionally and cut down any insult or aggression.
Willingness to help. Giving useful information or advice to a teammate is a great way to build a relationship. Soon you will notice that others become willing to help you in the moments when you need it.
Give and receive feedback. Providing high-quality, fact-based, structured feedback is hard. What is harder? Receiving feedback. No one likes to be criticized, and far too often we take it personally. When we do, we become defensive; thus we do not accept the feedback and remove any chance to improve ourselves.
We, software developers, do not make software just for fun. We do not use design patterns because someone at a conference said so. Our product should be as useful as possible for our clients, with the right features; hence we build maintainable software that is easy to extend.
Focus on the product, not the tasks. It is easy to forget about the product when working with an issue tracker, task after task, swiping from the TODO column to DONE. Stop for a moment and look for the business value gained by those tasks. If there is little to no product improvement in your tasks, something is wrong. On the other hand, if there are only business tasks and no time for maintenance (or technical debt), something is not right either.
Propose new features. Although stakeholders and business people are good at business, they are not experts in technology. I’ve noticed that the best ideas and solutions come from technical people: developers, designers, product owners. Having a good idea is one thing, but having the courage and charisma to propose it is another. If you have good ideas, you also have to communicate them effectively.
Public relations. There will come a time when your system is not working correctly or is down. It may be due to the deployment of a new version of the system, a hardware or network failure, or a wrong configuration. No matter the cause, you have to clean up the mess, and you have to do it professionally. This process usually has three steps:
a) Fix the problem. Make the system healthy again.
b) Repair potential damage.
c) Introduce changes to avoid similar emergencies.
What is essential during this time is communication. You have to communicate what happened, what the impact was, and when the solution will be deployed. In the end, writing a blameless postmortem helps the whole organization learn and improve.
Predict the next requirements. You have written and deployed your first service from scratch. Soon after, there are new requirements to develop in that service. Some features are often omitted (sometimes on purpose) in the first version of a service, yet almost always required in the next iterations. I find that the most popular cases are:
a) Reporting and analytical module.
b) Authorization mechanism.
c) System readiness for A/B testing.
d) Data search features.
You have to predict, or at least talk about, those common features at the beginning of the project. Discussing them may help you better plan and organize the design of your system.
During my time at university (5 years at the Warsaw University of Technology), the most important thing I learned was how to learn. It is vital to know your capacity for self-learning. Most of what I know comes from self-study. Obviously, I had, and still have, mentors who point me in a direction, but I have to learn by myself. Below are 6 things that I use to learn new things.
Books. My average pace of reading is about 15 books a year. It is not much, so I try to read books that suit me well. I use Goodreads to find new inspiration. You can see the books I have read here
Blogs. There are a lot of useful blogs to read online. I’ve discovered many exciting blogs through friends who shared interesting articles with me. I’ve added those blogs to my RSS reader, Feedly, so that I can stay up to date.
Podcasts and vlogs are modern forms of blogs. I can listen to them on the bus with headphones, which is the most convenient form for me.
Conferences (offline). I like traveling to conferences, though I have to admit that my attitude toward them has changed. At the beginning of my career, I was eager to listen to every conference talk; now, the most value for me is in meeting people and discussing novelties and trends.
Conferences (online). Nowadays, conference talks are available online - on YouTube. Because you can speed up or pause the video, I prefer this form of watching conference talks.
Workshops. I love attending workshops. During a workshop, you have a unique ability to focus on a problem and experience new tools hands-on. You can discuss the unknowns with the mentor.
Knowledge sharing. By this, I do not mean being a rock-star speaker or a world-famous blogger. It is about helping your colleagues every day, with small things. Whenever you give a small piece of advice, you want to make sure it is right and correct, so you often do some research and discover new things. Maybe someone else has a different point of view, and you can learn something new. The code review process is a great place to start. The famous Polish speaker Jacek Walkiewicz said: “Kto przewiezie innego człowieka swoją łodzią na drugi brzeg, sam też tam dopływa.” My translation: “Whoever carries another man across to the other shore in his boat arrives there himself.” Notice that if you support someone else, you also support yourself.
What motivates you? Are you eager to wake up early and code, or is each day in the office a nightmare for you? For me, if I weren’t a geek, I couldn’t be a software engineer.
Look under the hood. No matter what project you do, sooner or later something goes wrong, and going deep is inevitable. You may debug your framework or analyze the internals of your database; you should not be afraid to do that. Studying how your toolset works beforehand is even better.
Passion. I firmly believe that you have to love computers and software to be successful as a software developer. Only if you are devoted to something can you be successful at it. Some people say that they have never worked a day in their life because they love their job so much. In my opinion, passion helps you wade through the hassle that happens from time to time.
Curiosity. Steve Jobs once said: “Stay hungry, stay foolish.” I continuously try to do better: rewrite a method, refactor a piece of code, redesign an architecture to be more resilient. Never assume that you are an expert in something; there is always a bigger fish. Experiment, discover new things, and have fun.
Although we have smarter IDEs, better tools to design software, and more productive programming languages, the core principles of software engineering have not changed much. Our computers use the von Neumann architecture, introduced in 1945. For over 50 years, our programs have still often been imperative, focused on achieving specific goals.
Stability is a good thing: the core concepts stay the same; you only need to change tools. We place great value on configuring and designing our programs to be resilient to emergencies and network failures. Smaller or bigger innovations are a natural part of our process. The only constant in life is change. Sooner or later, you are going to unlearn what you know and learn new things. What will come next? I cannot predict the future.
At the beginning of 2019, I thought that I read approximately 10 books a year, but I wasn’t sure. My regular pace is one book a month, but I didn’t measure it in any way. I decided to change that.
The Goodreads reading challenge was a game-changer for me. This technique meets the SMART goal requirements: it was specific and measurable, and since I knew that I read about 10 books a year, I could set a target that was also achievable. I decided to commit to reading 15 books in 2019. In fact, I managed to read 18.
From those 18 books, I’ve selected the two that had the biggest impact on my career as a technical leader.
Designing Data‑Intensive Applications by Martin Kleppmann is a must-read for every Software Engineer.
If you think that you are not building (or will never build) a distributed system, look at single-threaded CPU performance over the last ten years. CPU performance is not increasing as rapidly as it used to. That is why we should learn how to build complex, distributed systems using only commodity-class machines.
Before I start discussing this book, we need to distinguish between two terms: competence and knowledge.
Competence is about ability: a certain area of skills that allows you to perform well. Knowledge is just knowing things; in most cases, you can Google the facts you are looking for. Take a look at the following answer on Quora.
This book helps you gain competence.
Let’s discuss the book itself. To align the reader’s understanding before the more advanced topics, the book introduces fundamental database and data-related algorithms. You may be surprised how many software engineers do not know these things.
The most precious thing you get from this book is a big-picture view of how many databases and data-processing frameworks work under the hood. You will also learn, by example, how to create distributed systems.
What is more, the author does not focus on a single technology or database. He constantly contrasts and compares different solutions, both those used today and those we were using some time ago. If you wanted to gain this knowledge by reading database documentation alone, you would have to spend far more time reading and analyzing it than reading this book.
What Got You Here Won’t Get You There by Marshall Goldsmith is a good read if you work a lot with other people, and an obligatory one if you want to be a leader or manager.
What you need to know about the author is that he has coached many CEOs at multiple companies. That is an impressive achievement, so I thought this book might be valuable. Furthermore, a friend recommended it to me, so it was a must-read. What is unusual about this book is that the author focuses on one and only one aspect of leadership: changing your behavior.
In the first part of his book, Dr. Goldsmith explains 20 common behavioral mistakes that many people, and most importantly many managers and directors, commit. I constantly asked myself: “What would I do if I were in that situation?” Realizing that I am not perfect was an important discovery for me, and it forced me to change.
The second part of the book helps you discover your weaknesses and supports you in changing your behavior. It presents many techniques, most of which can be implemented right away.
On the other hand, this book is sometimes boring. It spends too much time talking about the same issue or repeating itself; it could be more condensed.
Currently, I am preparing my reading list for 2020. It will be only 12 books, but I want to select the best books I can read. That is why I would welcome any suggestions from you.
If you are a Software Engineer, I would like to know your reading habits. Do you read more or less? What do you read? Can you recommend something? Leave a comment :)
Agile promises to deliver solutions through collaborative effort, cross-functional team design, modern programming methods, and probably many more things. Because of that, it is hard to distinguish between what is part and parcel of Agile and what are optional techniques developed over time. Agile IT Organization Design by Sriram Narayan is a bird’s-eye view of Agile topics and tries to organize them.
The author mentions many things somehow related to Agile, starting with project estimation, through project finance and software development practices, and ending with team room layout, so the reader should be ready to jump across a wide variety of topics. Putting so many subjects in one book means that specific aspects are discussed only briefly, without case studies or detailed guidance on how to kick-start those ideas. It is valuable if you want to get a general idea, but mediocre if you want in-depth knowledge. There is nothing to worry about, though, because there are many references to other publications and books that can help you explore further.
The good thing is that at the end of each chapter there is a summary which, after a short skim, helped me create my own book-reading order.
The author very often explains the reasons for doing things in a certain fashion. This approach helped me reflect on what I did in the past and how, and hopefully it will help me make better decisions in the future.
This book is written for people from all walks of life: you can read it whether you are a leader, a product owner, a software engineer, or a stakeholder. It would be favourable if every person read it, but that is not possible. If you work in a modern organization and have a general understanding of the nuts and bolts of Agile processes, you will probably not lose much by skipping it. It is a good book for someone who would like to take a break from technical books and read something without code listings inside. I would also recommend reaching for a particular chapter in your area of interest.
I highlighted over 50 sentences in this book: some to remember, others to think about or to discuss with colleagues. I have selected 9 that stuck in my mind:
I’ve been using the TDD technique for a few years, most of the time with satisfactory results. But it wasn’t an easy journey; it was a trip full of ups and downs. During this period my thinking about TDD changed dramatically. Or maybe I changed my perception of testing and software development during this time? Indeed, I have.
Lasse Koskela, in his book “Test Driven: TDD and Acceptance TDD for Java Developers,” wrote that “TDD is a technique that evolves together with the practitioner.” In this blog post, I would like to describe my own evolution in this matter.
You begin your journey with TDD. When you are new to something, you want to follow the rules strictly. One of them is the TDD cycle: “RED, GREEN, REFACTOR.” You have also heard about the three laws of TDD defined by Uncle Bob:
You are very confused about TDD because all the examples you can find relate to mathematical or algorithmic problems, but in your daily job you are obligated to write features and talk to the database and other systems. You are probably struggling with complex dependencies; maybe you have to use mocks.
But finally, after some practice you start to see the benefits, which are:
Task lists work perfectly for me. When I implement a business requirement, each small step and each corner case is represented by one task on my task list.
Then, for each task, I write one test; often I use parametrized tests to extend tests quickly. Finally, after a few TDD cycles, my task is completed, and I can move on.
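As a framework-free sketch of the parametrized-test idea (FizzBuzz here is just a stand-in business rule of my own; with JUnit 5 the same thing would use @ParameterizedTest), one check runs against many input/expected pairs:

```java
import java.util.Map;

// A framework-free sketch of a parametrized test: one check, many cases.
public class FizzBuzzSpec {
    static String fizzBuzz(int n) {
        if (n % 15 == 0) return "FizzBuzz";
        if (n % 3 == 0) return "Fizz";
        if (n % 5 == 0) return "Buzz";
        return String.valueOf(n);
    }

    public static void main(String[] args) {
        // Each entry is one parametrized case: input -> expected output.
        Map<Integer, String> cases = Map.of(
                1, "1", 3, "Fizz", 5, "Buzz", 15, "FizzBuzz");

        cases.forEach((input, expected) -> {
            String actual = fizzBuzz(input);
            if (!actual.equals(expected)) {
                throw new AssertionError(
                        input + ": expected " + expected + ", got " + actual);
            }
        });
        System.out.println("All " + cases.size() + " cases passed");
    }
}
```

Adding a corner case is now a one-line change to the case table, which is exactly what makes parametrized tests quick to extend.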
But sometimes new system requirements appear during my work, often because the domain is so complicated that it is hard to predict all the functionality up front. There is a big temptation to handle them now, while working on the current task, but that is dangerous: by doing it, you can lose focus on your current goal.
I’ve developed the habit of adding such a new requirement as a new task on my task list and completing it after the current one. This way, you gain some time to think about the need and to decide whether it is essential functionality at all.
One day, you will discover Behaviour-Driven Development. For example, look at this specification:
```gherkin
Scenario: Customer has a broker policy so DOB is requested
```
It is a very well-written test scenario. Moreover, it is executable: this text can be run with a tool called Cucumber. You don’t have to use it, though. You can use a standard test framework and write your tests using fluent test libraries, or you can build your own fluent API for tests if needed.
Start writing tests that will not only check your code but also be valuable documentation for your system.
Show me your tests and I will tell you everything about your code.
TDD can also mean “Test-Driven Design.” When you start thinking about it this way, your main reason for writing tests is to be able to refactor and re-engineer your codebase freely. For me, this is the highest value you can get from TDD. How do you achieve it? Try not to “cement” your code: test interfaces or facades, not the nuts and bolts of the implementation.
How do you check whether your tests are correct? Remove the production code and try to rebuild it in a different way, basing it only on the tests.
In this article, I presented the fundamental rules of TDD and discussed the topic of requirements. In the end, I told you about Test-Driven Design, which for me is the most valuable part of this technique. I hope that your understanding of TDD has improved and that you will start writing better tests and better systems.
I gave a speech about TDD. Slides available at tdd.lewandowski.io
Photo credits: Banner, Thumbnail
```shell
FOO1=BAR1
```
When you try to evaluate this file using the source command, you get an error with the fish shell.
```shell
$ source web.env
```
This is very annoying, so I decided to write a function that reads such a file and loads those variables. Here is how you can use it:
```shell
$ posix-source web.env
```
The source code of the function is below. The body shown here is a minimal sketch of such a function (it splits each KEY=VALUE line on the first '=' and exports the variable globally), so the details may differ from the original. Enjoy:

```fish
$ cat .config/fish/functions/posix-source.fish
function posix-source
    for line in (cat $argv)
        # split only on the first '=' so values may contain '='
        set -l kv (string split -m 1 '=' -- $line)
        set -gx $kv[1] $kv[2]
    end
end
```
Photo credits: Banner, Thumbnail
Each of these books was good, but “The Decision Maker” is a game-changer, and I can’t stop thinking about it. It was worth reading, for sure. I’ve decided to write a short book review and note the most important things I learned from it.
This book is a story about a company and its new owners, who have left the corporate world and decided to build a great place to work. It is full of dialogues, issues, and situations.
By observing those scenes, the author presents the ideas and values that matter when you have to lead a team or a company.
Is this book only for managers or bosses? Certainly not. If you work with other people or deal with non-trivial tasks, this book is for you. For me, it is an appropriate supplement to any “Agile” book.
The blueprint presented in this book is a good starting point for setting up a company culture.
The story did not take place in reality. Each scene looks genuine, but as a whole it seems artificial, like a romance from the ’90s where you know they will live happily ever after.
To begin with, you have to change your thinking about other people.
People:
In some people, you can see those values openly; in others, they are hidden, and you have to unlock them.
But there is always somebody who disagrees, and it is important to remember that. Do you see any similarities with Theory X and Theory Y employees?
Secondly, you have to choose the Decision Maker: the person who makes the decision. How do you find them? It is simple.
The Decision Maker is the person closest to the action. Bosses and leaders are often not deeply familiar with the situation; usually, team members are closer to the problem.
The Decision Maker has to be capable of listening to and understanding other people. Making a decision is a process in which you have to talk to and listen to others.
The Decision Maker should be aware of what is going on. Awareness of facts and consequences is crucial. If the person does not have the basic data for making decisions, like the company’s current financial status, you are responsible for unlocking that data.
Wisdom and knowledge are desirable qualities of that person.
It is the leader’s job to choose the Decision Maker. The leader should also observe and monitor the Decision Maker to see whether they make good decisions. If not, the leader should act.
It turns out that your employees’ decisions are often as good as or even better than yours can ever be.
People who are allowed to make the decision feel ownership; because of that, they will do everything to make the best possible decision.
The purpose of the advisory process is to look for a wider perspective.
The Decision Maker should ask at least a few people what they think about the decision.
He or she should ask:
But the Decision Maker makes the final call.
The Decision Maker process is not a silver bullet; it is only one tool or technique. The bigger picture is not straightforwardly visible in the book.
Between the lines, you can see many behaviours and dialogues that look familiar from “Teal Organizations.” If your organization is not ready, the Decision Maker process is definitely not the road to follow.
This post is organized into five steps. Each step represents one aspect of the issue and is also related to one commit in the example project repository.
This tutorial is based on Spring Boot version 1.3.1.RELEASE with spring-boot-starter-web. It uses jackson-datatype-jsr310 from com.fasterxml.jackson.datatype in version 2.6.4, which is the default version for this Spring Boot release. All of this is based on Java 8.
In the example code repository, you can find one HTTP service made with Spring Boot. The service is a GET operation which returns a class with Java Time objects. You can also find an integration test that deserializes the response.
I would like to return a Clock class containing a LocalDate, a LocalTime, and a LocalDateTime, preinitialized in the constructor, along these lines:

```java
public final class Clock {

    private final LocalDate localDate;
    private final LocalTime localTime;
    private final LocalDateTime localDateTime;

    public Clock() {
        this.localDate = LocalDate.now();
        this.localTime = LocalTime.now();
        this.localDateTime = LocalDateTime.now();
    }

    // getters for Jackson serialization omitted for brevity
}
```
The response class is serialized to a JSON map, which is the default behaviour. To some extent this is correct, but ISO-formatted strings in the response are preferable. By default, each date comes back as a nested map of its fields, roughly like this (illustrative values):

```json
{
  "localDate": {
    "year": 2016,
    "monthValue": 1,
    "dayOfMonth": 1
  }
}
```
Integration testing is an appropriate way to test our functionality.
ResponseEntity<Clock> resp = sut.getForEntity("http://localhost:8080/clock", Clock.class);
Unfortunately, the tests are not passing because of deserialization problems. An exception is thrown with the message can not instantiate from JSON object.
First things first: we have to add the JSR-310 module. It is a datatype module that makes Jackson recognize the Java 8 Date & Time API types.
Note that in this example the jackson-datatype-jsr310 version is inherited from the spring-boot-dependencies dependency management.
<dependency>
The response is now consistent but still not perfect. Dates are serialized as numbers:
{
We are one step closer to our goal. The tests are passing now, because this format can be deserialized without any additional deserializers.
How do I know? Start the application server on commit Step 2 - Adds Object Mapper, then check out Step 1 - Introduce types and problems, and run the integration tests without the @WebIntegrationTest annotation.
ISO 8601 formatting is a standard; I’ve found it in many projects. We are going to enable and use it.
Edit the Spring Boot properties file application.properties and add the following line:
spring.jackson.serialization.WRITE_DATES_AS_TIMESTAMPS = false
Now, the response is what I expected:
{
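The ISO 8601 text that Jackson now emits is simply java.time's native textual form, which you can verify with plain JDK code, independent of Spring or Jackson:

```java
import java.time.LocalDate;
import java.time.LocalDateTime;

// Shows that java.time's toString() already produces the ISO 8601
// shapes that WRITE_DATES_AS_TIMESTAMPS=false makes Jackson emit.
public class IsoDemo {
    static String isoDate() {
        return LocalDate.of(2016, 1, 28).toString();             // ISO local date
    }
    static String isoDateTime() {
        return LocalDateTime.of(2016, 1, 28, 12, 30).toString(); // ISO local date-time
    }
    public static void main(String[] args) {
        System.out.println(isoDate());      // 2016-01-28
        System.out.println(isoDateTime());  // 2016-01-28T12:30
    }
}
```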
Imagine that one of your client systems does not have the capability to format time. It may be a primitive device, or a microservice that treats this date as a plain sequence of characters. That is why special formatting is required.
We can change the formatting in the response class by adding the @JsonFormat annotation with a pattern parameter. Standard SimpleDateFormat rules apply.
Below is the service response using the custom @JsonFormat pattern:
{
Our tests are still passing, which means this pattern is used for serialization in the service and for deserialization in the tests.
There are situations where you have to abandon ISO 8601 formatting in your whole application and apply custom-made standards.
In this part, we will redefine the format pattern for LocalDate. This will change the formatting of LocalDate in every endpoint of your API.
We have to define:
- a DateTimeFormatter with our pattern,
- a Serializer using the defined pattern,
- a Deserializer using the defined pattern,
- an ObjectMapper bean with the custom serializer and deserializer,
- a RestTemplate that uses our ObjectMapper.
The ObjectMapper bean is defined with the @Primary annotation to override the default configuration.
My custom pattern for LocalDate is dd::MM::yyyy:
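Before wiring the pattern into a serializer, you can sanity-check it with the JDK's DateTimeFormatter, which accepts the same pattern symbols (punctuation such as the colons is treated literally):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

// Round-trips a LocalDate through the custom dd::MM::yyyy pattern.
public class CustomPatternDemo {
    static final DateTimeFormatter FORMATTER = DateTimeFormatter.ofPattern("dd::MM::yyyy");

    static String format(LocalDate date) {
        return FORMATTER.format(date);
    }
    static LocalDate parse(String text) {
        return LocalDate.parse(text, FORMATTER);
    }
    public static void main(String[] args) {
        System.out.println(format(LocalDate.of(2016, 1, 28))); // 28::01::2016
        System.out.println(parse("28::01::2016"));             // 2016-01-28
    }
}
```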
Definitions of the serializer and deserializer for the LocalDate class:
public class LocalDateSerializer extends JsonSerializer<LocalDate> {
Now, the response is formatted with our custom pattern:
{
When we define a custom serializer, our tests start to fail. This is because the RestTemplate knows nothing about our deserializer. We have to create a custom RestTemplateFactory that builds a RestTemplate with an object mapper containing our deserializer.
Custom formatting of dates is relatively simple, but you have to know how to set it up. Luckily, Jackson works smoothly with Spring. If you know other ways of solving this problem, or you have other observations, please comment or let me know.
Before you start your journey with Clojure:
For many people, Clojure’s brackets are a reason to laugh. Jokes like “How many brackets did you write today?” were funny at first.
I have to admit that at the beginning using the brackets was not easy for me. Once I realized that a bracket is just on the other side of the function name, everything became simple and I could code very fast.
After a few days, I realized that this bracket structure forces me to think more about the structure of the code. As a result, the code gets refactored and divided into small functions.
Clojure forces you to use good programming habits.
Clojure is homoiconic, which means that Clojure programs are represented by Clojure data structures. When you read Clojure code, you see lists, maps and vectors. How cool is that! You only have to know a few things and you can code.
Because Clojure code is represented as data structures, you can pass a data structure (a program) to a running JVM. Furthermore, compiling your code to bytecode (classes, jars) may be eliminated.
For example, when you want to test something, you are not obligated to start a new JVM with tests. Instead, you can just synchronize your working file with a running REPL and run the function.
The traditional way of working with the JVM is obsolete.
In the picture above, on the left you can see an editor; on the right there is a running REPL.
In the same way you can run tests, which is extremely fast. In our project we had ~80 tests, and executing them all took about one second.
Simplicity is the ultimate sophistication.
After getting familiar with the language, it was really easy to read code. Of course, I was not aware of everything happening under the hood, but the consistency of the written program evoked a sense of control.
When a data structure is your code, you need some additional operators to write effective programs. You should get to know operators like ‘->>’, ‘->’, ‘let’, ‘letfn’, ‘do’, ‘if’, ‘recur’ and others.
Even if there is good documentation (e.g. for let), you have to spend some time analyzing it and trying out examples.
As time goes on, new operators will be developed, but this may lead to multiple Clojure dialects. I can imagine teams (in the same company) using different sets of operators, dealing with the same problems in different ways. It is not good to have too many tools. Nevertheless, this is just my suspicion.
I wrote a function that rounds numbers. Even though the function was simple, I wanted to write a test, because I was not sure whether I had used the API correctly. The test function is below:
(let [result (fixture/round 8.211M)]
Unfortunately, the tests were not passing. This was the only message I received:
:error-while-loading pl.package.calc-test
Great. There is nothing better than a good exception message. I spent a lot of time trying to solve this, and the solution turned out to be extremely simple: my function was defined with defn- instead of defn. defn- means private scope, so the test code could not access the function under test.
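As an aside, the 8.211M literal in the test is Clojure's BigDecimal. The post does not show the round function itself, but an equivalent rounding step in Java might look as follows; the scale of two decimal places and the HALF_UP mode are my assumptions:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

// Hypothetical Java counterpart of the fixture/round function:
// rounds a BigDecimal to two decimal places, half-up.
public class RoundDemo {
    static BigDecimal round(BigDecimal value) {
        return value.setScale(2, RoundingMode.HALF_UP);
    }
    public static void main(String[] args) {
        System.out.println(round(new BigDecimal("8.211"))); // 8.21
    }
}
```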
Assertions can be misleading, too. When the tested code does not work properly and returns wrong results, the error messages look like this:
ERROR in math-test/math-operation-test (RT.java:528)
I haven’t had time to investigate it, but in my opinion it should work out of the box.
It is a matter of time until the tools get better. Problems like these slow you down, and they are not nice to work with.
Clojure’s concurrency impressed me. Until then, I knew only the standard Java synchronization model and the Scala actor model. I had never thought that concurrency problems could be solved in a different way. I will explain the Clojure approach to concurrency in detail.
The closest Clojure analogy to variables are vars, which can be created with def.
(defn a01 []
We also have local variables, which exist only in a let scope. If we re-define the value of amount there, the change takes place only in the local context.
(defn a02 []
The following will print:
100
Nothing unusual. We might expect this behavior.
The whole idea of concurrent access to variables can be written in one sentence: refs ensure safe shared access to variables via STM, where mutation can occur only inside a transaction.
Let me explain it step by step.
A ref (reference) is a special type that holds a reference to your object. As you can expect, the basic things you can do with it are storing and reading values.
STM stands for Software Transactional Memory, an alternative to lock-based synchronization. If you like theory, continue with Wikipedia; otherwise, keep reading to see examples.
(defn a03 []
In the second line, we create a reference named amount, whose current value is 10.
In the third line, we read the value of the reference called amount. The printed result is 10.
(defn a04 []
Using the ref-set command, we try to modify the value of the reference amount to 100. But it won’t work; instead, we get an exception:
IllegalStateException No transaction running clojure.lang.LockingTransaction.getEx (LockingTransaction.java:208)
(defn a05 []
To modify the value, we have to use the dosync operation. It creates a transaction, and only then will the referenced value be changed.
The aim of the previous examples was to get familiar with the new operators and basic behavior.
Below, I have prepared an example to illustrate the nuts and bolts of STM, transactions and rollbacks.
Imagine we have two references for holding data:
- source-vector, containing three elements: “A”, “B” and “C”,
- destination-vector, initially empty.
Our goal is to copy the whole source vector to the destination vector. Unfortunately, we can only use a function that copies elements one by one: copy-vector.
Moreover, we have three threads that will do the copying. The threads are started by the future function.
Keep in mind that this is probably not the best way to copy vectors, but it illustrates how STM works.
(defn copy-vector [source destination]
Below is the output of this function. We can clearly see that the result is correct: the destination vector has three elements. Between the Sucessful write messages there are many messages starting with Trying to write.
What does that mean? Rollbacks and retries occurred.
(l/a06)
Each thread started to copy the vector, but only one succeeded. The remaining two threads had to roll back their work and try one more time.
When Thread A (the red one) wants to write the variable, it notices that the value has been changed by someone else: a conflict occurs. As a result, it abandons the current work and retries the whole dosync section. It will retry until every write operation succeeds.
Cons:
- the dosync section has to be pure, without side effects. For example, you cannot send an e-mail inside it, because you might send 10 e-mails instead of one.
Pros:
There is a lot that Java developers can gain from Clojure. They can learn a new way to approach code and to express a problem in it. They can also discover tools like STM.
If you want to develop your skills, you should definitely experiment with Clojure.
A software developer’s skill set should not be limited to hard programming skills. An important part of our work is communication, problem understanding, self-sufficiency and other soft skills.
In this blog post, I would like to show what a software developer can gain from participating in an agile-like conference. You will also find here a lot of information about Stretch Con, on which my experience is based.
As a conclusion, I’ll present the effect the Stretch conference had on me.
There are many Agile/Lean/Leadership conferences, so you do not have to choose Stretch. Look around for upcoming events, meetups or trainings; there is always something going on. But in this post I’ll focus only on Stretch.
On a regular basis, I am a software developer, and I spend at least half of my time programming. My contribution in the company is not strictly related to any management role.
I wanted to go to Stretch Con because I believed that:
Stretch was different, unlike any other conference I have attended, mainly because of the topic but also because it forced me to think and interact.
The open spaces were great. Topics were shaped dynamically (voted via sli.do). It was a place where you could directly see that other people from different companies (and different countries) have the same problems!
The open spaces took the form of brainstorming, where everyone threw in a possible solution to the problem. It gave us the opportunity to share and discuss ideas.
During the open space time, something unexpected happened: a discussion panel with Joseph Pelrine emerged. It started naturally, and eventually a lot of people gathered around him. They asked Joseph questions, and he responded with deep explanations. The discussion was about:
And many more topics, but I was not able to note everything. For such moments alone, it was worth going there.
In my opinion, on average every second talk was worth watching, which I think is a good score for a single-track conference. Listening to the ‘leaders’ was priceless, and the wide variety of subjects helped me realize how huge this topic is.
As usual, I would like to recommend 3 presentations that are worth seeing, but there are many more that might interest you. Visit the Ustream channel to watch them.
After a great introduction, James presented detailed knowledge about habits. He showed us techniques for shaping habits and explained habit triggers. Finally, he also presented tricks for sustaining our habits.
You can read more about habits at James Clear Page.
Conference video available here.
It was a presentation that forced me to reflect on myself. It helped me to understand who a leader is, what a team or a company is, and how all of this fits into our world. Most importantly, how to ‘get’ a goatherd.
Conference video available here.
Joseph started his lecture with an explanation of complex systems. Then he discussed social self-organization. Among the many concepts he presented, one particularly stuck in my mind: you have to set things up for good things to happen naturally; then you have to monitor them and decide what to do more of and what to stop. If you want to know how to set things up, you have to watch the video.
Conference video available here.
I’ll remember this event as something positive. Here is my summary.
Pros:
Cons:
I made 464 lines of notes over the whole conference. There were also official notes, in case I missed something. A great concept and great drawings.
I was there with Wojtek. After the conference we spent 3 hours talking and discussing the Stretch content, and we managed to cover only a few talks; there was a lot of material presented there.
On my way home, I wrote down a few action points and reflections about myself that I will try to develop in the upcoming weeks.
In conclusion, I highly recommend that every software developer attend this kind of event from time to time. You will surely come out a different person.
I had to implement an algorithm that depends on the current date. The core information for this algorithm is the number of days between the current date and some date in the future.
Therefore, there is a call somewhere in the code:
(. java.time.LocalDate now)
For the tests to be stable, I had to make sure that this call always returns the same day.
I decided to extract the creation of the current date into a separate function:
(defn now-date [] (. java.time.LocalDate now))
During tests, I declared a different function:
(defn fixed-date [] (. java.time.LocalDate of 2015 01 28))
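For comparison, Java 8 itself anticipates this testing problem: LocalDate.now can take a java.time.Clock, and Clock.fixed makes it deterministic. This is standard JDK API, not code from the post:

```java
import java.time.Clock;
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneOffset;

// Pins "now" to 2015-01-28 by injecting a fixed Clock, the plain-Java
// equivalent of a fixed-date test function.
public class FixedDateDemo {
    static LocalDate today(Clock clock) {
        return LocalDate.now(clock);
    }
    public static void main(String[] args) {
        Clock fixed = Clock.fixed(Instant.parse("2015-01-28T00:00:00Z"), ZoneOffset.UTC);
        System.out.println(today(fixed)); // 2015-01-28
    }
}
```

Production code would call today(Clock.systemDefaultZone()), while tests pass the fixed clock.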
Passing in a function that creates the current date solved the problem. It worked great, but it had the following disadvantages:
Having a function that returns the current time, I decided to find a way to override its definition in tests. I found an operation called with-redefs-fn, which allows re-defining a function temporarily in a local context. With the fixed-date function defined, the block of code looks like this:
(deftest my-test
fixture/now-date is a reference to the function I wanted to replace. This time I was amazed by the language’s possibilities. But there was one more problem to solve: I did not want to use Java notation.
There is a library called Clj-time. It wraps the Joda Time library and makes Clojure code friendlier. I wanted to stick with the Java 8 library, but I did not see any alternatives.
So I replaced (. java.time.LocalDate now) with (t/now), along with the creation of fixed dates, and then I came up with an idea: maybe I should redefine Clj-time itself? My production code would be simpler, and the test code would be simpler too!
(deftest my-test
This is my final solution. I am still impressed by how easily this can be done.
I have been using Clojure for a week. If you have any other ideas on how to solve this problem, comment or let me know.
Photo credit: Vintage alarm clock, Thumbnail
The main character of this book is an agile coach. He is hired by a company to help an underperforming team: the Dream Team. The book is sliced into about 250 chapters, and every chapter is a new chance to learn something new. I’ve marked 26 notes, related to:
Most significantly, you can clearly see all these techniques in the same order as in the Spock testing framework.
You can find it in the book category on Amazon. But is it really a book? I do not think so. It was more like a game for me.
Every ~10 chapters you have to make a decision, and every decision you make may lead to the failure or success of your mission. At first I was confused by this approach, and I went back to previous choices to see where they led me.
After reading about 50 pages, I decided to draw a chapter graph (with the decisions) and come back to the paths I had not chosen once I had read the whole book.
As a result, I managed to make only good choices and lead the team to a happy ending. The graph took five A4 pages. It looks like this:
In contrast to the good choices, the failure paths showed me where the team might end up. Learning from someone else’s mistakes is the most important lesson for me.
I would like to thank Wojtek Erbetowski for recommending this book to me.
Photo credit: Nightmare
Many of my friends have noticed that I change Apple computers like gloves. They often ask me for advice on which Apple computer to buy, especially when they do not have a few spare thousand złoty for new hardware. Usually they want to buy something used, as a trial. So I decided to write a short guide to the world of used Apple computers that can be bought in Poland.
I would never have reached for Apple computers if I had not needed a machine that lets me focus on what I really like and want to do. I love playing with Linux. The multitude of tools and possibilities in those systems meant I could spend hours changing something and exploring many different distributions. At some point it became clear to me that I needed a computer I could not ‘break’, one I could treat as infrastructure (reliable, not demanding attention).
The only answer was Apple: native *nix tools combined with things that should always just work.
My adventure with Macs started in 2006 with an iBook G4 running OS X Tiger. It was a great computer, but the machine aged, and a new one was out of my reach because of the price. So first there was a white MacBook, then a Unibody MacBook, a MacBook Pro, a Mac Mini, a MacBook Air, an iMac, a second MacBook Pro, and a second Mac Mini.
All of these computers, except the last MacBook Pro, were bought used. I gained some experience in filtering offers, but even so I got burned a few times.
I focus my searches on three main places:
The computer should be for everyone. It should handle a multitude of open pages, an e-mail client, an instant messenger, and a few extra programs without any problems.
To interpret this article correctly, a basic knowledge of computer hardware (hard drive, processor) is required.
If you are a computer hobbyist, you can replace some components (hard drive, RAM, battery) yourself. If that does not interest you, I recommend handing the machine over to a service shop. The costs should not be high, as these are routine operations.
To carry out these operations yourself, you need basic manual skills, the ability to follow iFixIt guides, a screwdriver set costing a few dozen złoty, and self-confidence. When servicing, remember to disconnect the power, avoid touching capacitors (they may still be charged), and discharge static electricity, for example by touching a radiator.
Besides product names such as iMac or MacBook, it is customary to distinguish computers by the release date expressed in words (Early, Mid, Late), the year and the model, e.g. MacBook Mid 2012. Each product of course has a model identifier with a version number, e.g. MacBookPro9,2, and a more detailed product number. The model designation with the date is enough to identify the hardware correctly.
You can read about all of these models on Wikipedia; it is worth studying before buying a computer.
The conversation usually starts with the question: “Which computer should I buy?” I usually answer that we can choose from
Desktops:
Laptops:
I have not listed the Mac Pro or the 17” MacBook Pro. If you know you need one of those, you will certainly look into it yourself.
The prices given are approximate. There will no doubt be cheaper offers as well as more expensive ones, which will presumably be suitably justified.
The prices above lose relevance every day. In a year, i.e. by autumn 2016, they will certainly be out of date.
From my observation, offers can be divided into 3 categories:
It is worth spreading the purchase over a few weeks. Review the offers once a week, and only decide to buy after 3-4 weeks. Be particularly careful when buying a computer offered by a second-hand dealer. Such computers often come from abroad, are in poor technical condition, were often used very intensively, and may have hidden faults. Do not treat this as someone trying to cheat you; rather, try to understand the machine’s history and what to expect. Fortunately, dealers provide short ‘start-up warranties’.
Watch the sale offers for a given series, e.g. MacBook Pro 13”. Most of the models on offer will be older; only a few offers will concern newer models, even though the prices are similar. Older models stay listed for a long time, until the seller lowers the price. Newer models usually sell quickly.
It used to be that in most laptops you could replace the processor with a faster one, just as in desktop PCs. Now computers are sold in which the only thing you can do is clean the dust out of the fan. Of course, consumers expect smaller and lighter computers; this is the price that has to be paid.
In Apple products there has always been a visible trend that the user should have no need to get inside the computer; only service technicians should see its interior. There is plenty of discussion on this subject, and I will not start another one. My goal is only to point out some limitations worth paying attention to when buying a computer.
The computers I listed have 2 or 4 physical processor cores. If you are wondering whether you would use a 4-core processor, you can safely buy a 2-core one. If your work requires a 4-core processor, you must narrow your search to the Mac Mini (some models), the MacBook Pro 15” and the iMac (some models).
The first factor worth paying attention to is the processor’s series designation, described on Intel’s site. It tells us a lot about the processor and its intended use.
How to learn more about a processor? Just enter its designation at ark.intel.com.
How to compare processor speeds? Check a benchmark site; it is useful when comparing several computer models.
I would not recommend that anyone buy a computer (whether PC or Apple) with less than 8GB of memory and no possibility of expansion. That amount of RAM is needed for snappy work with several applications (or several users).
I will agree, however, that it is possible to work on a computer with 4GB of RAM. You have to be careful not to keep 50 tabs open in the browser, and to close unneeded programs. Heavier applications or virtualization are of course out of the question. 4GB is just right if you want to check e-mail once a week and use the computer a few hours a week at most. Since I see how people actually work on computers, I would not recommend a 4GB machine to anyone, because I do not want to listen to complaints that the computer sometimes has moments of weakness.
If you have an SSD, 4GB of RAM may not hurt that much. With a spinning hard drive, it will certainly be very noticeable.
Accordingly, I adopt the rule of crossing out computers that cannot handle 8GB of RAM.
SSDs offer excellent performance and, what is more, they have become very cheap. If you buy a computer with a traditional spinning disk, I recommend replacing it with an SSD (if possible, of course). A 120GB drive (costing about 300 zł) will have a huge impact on the speed of the computer.
When deciding to buy a desktop (an iMac or a Mac Mini), remember about the keyboard. An ordinary keyboard will work with a Mac, but to work comfortably you need a proper one. I recommend the wired ones: they are more comfortable and cheaper. Unfortunately, this is often an extra cost of about 200 zł. I recommend buying it new.
In most cases, Apple desktops are in good technical condition, since they never leave the desk. We need not worry about liquid spills or mechanical damage.
With the Mac Mini we have full freedom when it comes to replacing the RAM (up to 16GB) and the hard drive (any 2.5” SATA drive). This does not apply to the latest model, Late 2014.
If the budget is very tight and the computer will be used only for writing in Word and handling e-mail, we can look for the plastic Mac Mini, i.e. the 2009 model. I would not spend more than 1000 zł on such a machine, and it should be equipped with 8GB of RAM and/or an SSD.
The Mid 2010 model is a phenomenon: it is the first Unibody (aluminium) model and at the same time the last one with an optical drive. Before buying, check that the optical drive works; they were fairly failure-prone. Unfortunately, it has a processor from the same generation as the plastic model. Nevertheless, this model holds its price very well (1000-1500 zł); here you pay for the aluminium case.
When choosing a Mac Mini, I would recommend the Mid 2011 or Late 2012 versions, i.e. those with i5 processors. Prices of poorly equipped models start at around 1500 zł. Upgrading this computer with more RAM and an SSD will let you enjoy it for the next several years. For up to 2000 zł you should get a machine with at least 8GB of RAM and either two drives (SSD and HDD) or one larger SSD.
Such a computer will hold its price well, because it is the last model that allows mounting two 2.5” hard drives and expanding the memory to 16GB, which is impossible in the Late 2014 model.
The models with 4-core processors deserve special attention. They offer very high performance, but when they appear on the market they disappear very quickly and cost over 2000 zł.
Buying a Mac Mini makes financial sense if you have your own monitor. Otherwise, consider buying a 21.5” iMac.
iMacs are amazing thanks to their 27” display with a resolution of 2560 × 1440.
The Late 2009 model started the ‘Unibody iMac’ series. For up to 3000 zł you should be able to buy a Late 2009 or Mid 2010 machine with the display in good condition, the 27” model of course. Unfortunately, these are fairly old and weak dual-core processors. We cannot expect too much from such a computer, but it is an ideal machine for writing.
The 2011 model is a little more expensive, because it has a 4-core processor. It is one of the most popular iMacs on the Polish market. Prices reach as much as 5000 zł, which in my opinion is a bit too expensive. The price spread is large because the configurations vary: from a single spinning hard drive and 4GB of RAM to two SSDs and 32GB of RAM.
Unfortunately, iMacs from this production period have one factory flaw: dust accumulates behind the display. Disassembling and cleaning the display costs a few hundred złoty (400-600). If the owner bought the extended warranty, there is a fair chance the defect was fixed towards the end of the second warranty year (probably by replacing the panel with a new one). The condition of the display is the key thing to check when buying such an iMac.
For about 2500 zł you should be able to buy a Unibody 21.5” iMac from the same production period. The iMac was distributed together with a mouse and keyboard, so it can be an ideal first Mac at a good price.
Replacing RAM in an iMac is simple: open the flap at the back of the computer and swap the modules. Getting to the hard drive, however, requires removing the display. This is doable at home, but it requires some manual skill. Of course, iFixIt is our friend.
Starting with the Late 2012 model, the iMac became slim, and in 2013 it received new processors. The availability of these computers on the second-hand market is low, presumably because they have very good specifications and the first owners do not want to part with them. The slim models are an expense well above 4000 zł.
In 2014, the 27” model with a 5120 × 2880 Retina display was introduced. The first used units cost around 7000 zł, which is about 1000-1500 zł less than a new computer.
Pay particular attention to a laptop’s condition before buying. Check carefully for dents on the corners; they may be remnants of a fall. Because of dented corners, the computer may not close properly, the display may warp, and there may be problems opening the lid. I definitely do not recommend buying such a computer, even if it is much cheaper than other units of the same model.
If, however, you are determined and the damage seems superficial, inspect the computer in person to see that it works properly, above all closing and opening. I also recommend asking the owner to remove the back cover. I once considered buying such a computer, but after the cover came off I saw speaker mounts glued together with super glue. I always assume the seller is honest and that the price should reflect the condition of the product. All I want is to be fully aware of what I am buying and for how much.
Check that all the keys of the laptop offer uniform resistance. In the case of a spill (e.g. juice), some keys will move more stiffly and without the characteristic click. Not everyone pays attention to this, and some may consider it an unimportant detail. However, hardware that has been spilled on may, despite drying, refuse to cooperate after some time.
If you suspect that the laptop may have been spilled on (even though the owner denies it), you can also ask for the cover to be removed and look for traces of liquid. You can look up where the liquid-contact indicators are located in a given computer and check them, though this is not always possible. If I have any suspicion that the hardware was spilled on, I prefer to pass on the offer.
Never buy a laptop that has a third-party replacement for the original charger. The counterfeits are failure-prone and spark when plugged into a socket.
Jeżeli kupujesz laptop z polskiej dystrybucji, w zestawie powinieneś otrzymać krótką końcówkę zasilacza jak i końcówkę zasilacza z około 1.5 metrowym kablem.
Zwróć uwagę w jakim stanie jest kabel w rejonie wtyczki MagSafe oraz przy wyjściu z zasilacza. Jeżeli jest napuchnięty, lub widoczne są pęknięcia, może to oznaczać konieczność wymiany całego zasilacza, gdyż wymiana samego kabla może być nie opłacalna.
Bateria w kilkuletnim laptopie, spokojnie może mieć kilkaset cykli. Jeżeli ma ponad 500, powinna nam się zapalić żółta lampka, żeby sprawdzić jaka pojemność baterii została. Baterie nie zużywają się równomiernie i ilość cykli nie powinna być głównym wyznacznikiem jej stanu. Jeżeli po kliknięciu na ikonę baterii, widzimy napis “Service battery”, może to oznaczać konieczność wymiany baterii. Jednakże, najlepiej sprawdzić pozostałą pojemność baterii w oknie dotyczącym informacji o komputerze.
Is the replacement simple and doable at home?
In a MacBook Air, anyone should manage it. Unfortunately, in the Retina MacBook Pro things are worse: the battery is glued in and requires care, so it is worth handing the machine over to a specialist.
The MacBook Air is the best everyday laptop you can buy on the second-hand market. It is a small machine with long battery life, a good display and, for its size, a satisfying processor.
I recommend the Mid 2011 model and newer. Earlier generations had much weaker processors and lower display resolution.
If you will mostly work without an external monitor, the 13" model is more comfortable. If, on the other hand, you travel a lot and work on an external monitor, the 11" model is unbeatable.
In this model the RAM cannot be upgraded. Of the 41 Airs I found in the listings, only 3 had 8GB of RAM. These were usually new machines, often still under warranty, so prices hovered around 4000 zł. Older machines with 8GB of RAM do turn up, however, and such models are worth waiting for; expect to spend around 3000 zł. The 11" models will be cheaper due to their lower popularity.
With this model, pay particular attention to mechanical damage and battery condition.
With roughly 1000-2000 zł at our disposal, we can consider a MacBook Unibody (Late 2008) or a MacBook Pro (Mid 2009 or Mid 2010). Prices vary with condition and the number of upgrades the owner has added. The more expensive units probably have two drives fitted, one of them an SSD (it can be mounted in place of the optical drive), and should have a replaced battery.
If we are strongly focused on a good price, we can look at the Polycarbonate model (commonly called the White). In that case we are interested in models from Early 2009 onwards. Such a computer in good condition can be had for under a thousand złoty. Keep in mind that the price is lower because plastic is not the same as an aluminium body.
These are the oldest MacBooks supported by OS X El Capitan, so there is a chance they will not be supported by future versions of the operating system. Moreover, we cannot expect miracles from such a machine; these are nearly 6-year-old designs. Still, if we pick a well-kept unit and our needs are basic, it can be a very good purchase.
The Early 2011 model started the series of MacBooks with i5 processors. These processors offer decent performance, and the ability to fit two drives makes it a very attractive model.
Models with the standard configuration can be bought for around 2000 zł. Prices of well-kept, better-equipped units top out at around 3000-3500 zł.
Unfortunately, the computer has one fundamental flaw: the 1280 × 800 display resolution is unimpressive. On top of that, the machine itself is thick and heavy.
Here we have to decide whether we care about two hard drives and plenty of RAM, or whether we prefer the mobility of the Air. In nearly every case I recommend the Air.
The Late 2012 model started the series of computers with the Retina display.
Moreover, unlike its predecessor, it is half a kilogram lighter, has a bigger battery and is slimmer.
Unfortunately, the battery is hard to replace, the RAM is part of the logic board (it cannot be upgraded), and the drive connects over PCIe (there is no broad market of replacements, as there is with 2.5" drives).
A machine with 8GB of RAM and a 128GB SSD can be bought for around 3500-4000 zł. Naturally, the better the configuration, the higher the price, up to the point where we reach the price of a new computer.
The 15" models are worth considering for the higher-resolution display and the quad-core processor. Quad-core processors were available in the Early 2011, Late 2011 and Mid 2012 models.
For 3000 zł we can buy models with the basic configuration or with light cosmetic damage. Prices of well-equipped units from the last year of production reach 4500 zł, but below 4000 zł we should be able to buy a unit that will satisfy even the most demanding.
Prices are attractive because most people are looking for the 13" model.
Models with the matte 1680 × 1050 display deserve special attention and are worth looking out for when buying.
This is the model I recommend to everyone for everyday work.
This series of computers is easy to repair and to swap components in. Without any trouble we can fit two hard drives and 16GB of RAM, and replace the battery.
If we decide to buy such a MacBook, we must be prepared to spend at least 5000 zł. Few of these machines reach the market, because the original owners are still happy with them and do not want to sell.
As with the 13" Retina model, this computer is not easy to service. If we want to buy a computer for years to come, let's make sure it has 16GB of RAM and a larger drive.
Nobody knows which development path Apple will choose. Will the quality of the software, of OS X, rise or fall? Will the hardware be built with the greatest care, from the right components? Will the engineers and designers present products worth our attention? Nobody knows.
I wanted to try a new programming language. A language that is trivial and complex at the same time: trivial to write quickly, complex when you struggle with performance or want state-of-the-art architecture.
This week I decided to try Rust. Rust is an expression-based language. What does that mean? In general, an expression is a collection of symbols that jointly express a quantity (or, put simply, an expression produces at least one value). There are also statements, which are the smallest standalone elements of a programming language, or in other words, the building blocks of a program.
In Rust almost everything is an expression, but there are two kinds of statements. The first is a declaration statement (for example, a let binding), and the second is an expression statement, whose purpose is to turn any expression into a statement (for example, by adding ; at the end of the line). Why do I even write this? Look at this let statement, whose right-hand side is an if expression.
```rust
let y = if x == 5 { 10 } else { 15 }; // y: i32
```
Notice that there are no semicolons after 10 and 15. This means they are expressions, and the if itself is an expression too, which is why you can assign its result to y.
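The same rule applies to plain blocks and even whole function bodies. Here is a small sketch of my own (the `describe` function is just an illustration, not from the book):

```rust
// A block is an expression: its value is its final expression
// (the one without a trailing semicolon).
fn describe(x: i32) -> &'static str {
    // `if` yields a value, so the whole function body is one expression.
    if x == 5 { "five" } else { "not five" }
}

fn main() {
    let y = {
        let doubled = 5 * 2;
        doubled + 1 // no semicolon: this becomes the block's value
    };
    assert_eq!(y, 11);
    println!("{} {}", y, describe(5));
}
```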
After reading the first part of the Rust Book, I decided to write my first program: quicksort. Here is the code.
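The original snippet did not survive in this version of the post. For reference, a minimal in-place quicksort in Rust might look like this (my own sketch, not necessarily the code from the post):

```rust
// In-place quicksort using the Lomuto partition scheme.
fn quicksort<T: Ord>(slice: &mut [T]) {
    if slice.len() <= 1 {
        return;
    }
    let pivot_index = partition(slice);
    let (left, right) = slice.split_at_mut(pivot_index);
    quicksort(left);
    quicksort(&mut right[1..]); // skip the pivot itself, already in place
}

// Moves the last element (the pivot) into its final sorted position
// and returns that position.
fn partition<T: Ord>(slice: &mut [T]) -> usize {
    let pivot = slice.len() - 1;
    let mut store = 0;
    for i in 0..pivot {
        if slice[i] <= slice[pivot] {
            slice.swap(i, store);
            store += 1;
        }
    }
    slice.swap(store, pivot);
    store
}

fn main() {
    let mut v = vec![5, 2, 9, 1, 5, 6];
    quicksort(&mut v);
    println!("{:?}", v);
}
```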
Lessons learned:
```rust
let length = Inches(3);
```
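That one-liner comes from the book's tuple-struct (newtype) example. A slightly fuller sketch, with the conversion method being my own illustration:

```rust
// Newtype pattern via tuple structs: thin wrappers that give a plain
// number a distinct type, with the wrapper resolved at compile time.
struct Inches(i32);
struct Centimeters(f64);

impl Inches {
    fn to_centimeters(&self) -> Centimeters {
        Centimeters(self.0 as f64 * 2.54)
    }
}

fn main() {
    let length = Inches(3);
    let Centimeters(cm) = length.to_centimeters();
    println!("{} cm", cm);
}
```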
Rust is about performance. Many of its abstractions are resolved at compile time. It is said that new programmers fight with the compiler, and I can confirm that. Once you gain more experience, it all becomes easier. I have to admit I had a similar experience with Scala.
Memories of learning my first programming language came back to me when the book discussed pass-reference-by-value.
Rust takes pointers to a whole new level. There are mutable references, boxes and more.
In fact, pointers are an introduction to Rust's memory model: Ownership, Borrowing, Boxes and Lifetimes.
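A tiny sketch of how those concepts show up in practice (my own example):

```rust
// Borrowing: the function takes a shared reference,
// so the caller keeps ownership of the vector.
fn sum(values: &[i32]) -> i32 {
    values.iter().sum()
}

fn main() {
    let v = vec![1, 2, 3];   // Ownership: `v` owns the heap allocation.
    let total = sum(&v);     // Lend `v` out; no move happens.
    assert_eq!(v.len(), 3);  // Still usable here: it was only borrowed.

    // Box: an owned pointer to a value placed on the heap.
    let boxed: Box<i32> = Box::new(42);
    println!("{} {}", total, *boxed);
}
```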
I had a great time playing around and checking what would work and what would not. I recommend this part of the book, as well as this article, for further reading.
For me it was a good learning journey to read about and try a new language. What will be next? Maybe Go or Haskell.
So why should you attend Jitter?
You can get hands-on with cutting-edge technology like Emberlight or Wunderbar.
You can team up with new friends to build amazing stuff.
I had the opportunity to take part in the "Cardboard Design" workshop, mentored by Wiesław and his friends.
We worked in groups. Our goal was to visualise music using an Arduino, a few servomotors and, of course, cardboard. We totally unleashed our imagination, and the results were amazing. We used 100% of our creativity.
Besides that, the event took place in a film studio, raw and unpolished in style, which gave me extra energy to work. There were a few drawbacks: it was loud and cold, but the organisers did everything to minimise them.
I am definitely looking forward to the next Jitter at next year's MCE.
```
[ERROR] The git-push command failed.
```
This happens when we use ssh connections to Gerrit. Yet when you try to push straight to master (bypassing code review), it works! How is that possible?
You probably have an ssh URL with your user name in your gitconfig. But the SCM section of your project (in pom.xml) does not contain your user name. So which user name does the maven release plugin use? Your computer account name, which in most cases differs from your Gerrit user name.
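For illustration, a hypothetical scm section of this kind; the host and project name are placeholders, not from the original post:

```xml
<!-- Hypothetical example: the ssh URL carries no user name,
     so the maven release plugin falls back to your OS account name. -->
<scm>
  <developerConnection>scm:git:ssh://gerrit.example.com:29418/my-project</developerConnection>
</scm>
```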
How to fix it? Add an entry to your ~/.ssh/config file:
```
Host gerrit
    # Placeholders: put your Gerrit host and your Gerrit account name here
    HostName gerrit.example.com
    User yourGerritUsername
```
There may be many other reasons for a Permission denied error, but this was the trickiest one I have ever seen.
That script was written by someone on Ubuntu, and guess what? The GNU versions of those programs (sed and xpath) are not compatible with the BSD versions, so the script kept failing :(
I tried to patch the script to work with both, but forget about it: just use the GNU programs.
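The incompatibility shows up even in basics like the -i (in-place) flag. A sketch, assuming GNU sed and a throwaway file:

```shell
# GNU sed: -i takes no separate backup-suffix argument.
printf 'colour\n' > /tmp/sed-demo.txt
sed -i 's/colour/color/' /tmp/sed-demo.txt
cat /tmp/sed-demo.txt

# BSD sed (the one shipped with OS X) requires an explicit,
# possibly empty, suffix instead:
#   sed -i '' 's/colour/color/' /tmp/sed-demo.txt
```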
To install GNU sed on OS X via Homebrew, run:
```
brew install gnu-sed --with-default-names
```
To install GNU xpath on OS X via Homebrew, run:
```
brew tap concept-not-found/tap
```