Tuesday, October 30, 2018

Functional Java: Hate the player not the game

We've all heard it over and over: Java 8 is not really functional programming. I admit that the lack of tail call optimization, the restrictions on closing over the enclosing context, and the hideous boxed() call needed when mapping a primitive stream to a boxed one can make functional programming in Java a little verbose, and maybe even bitter. But that doesn't prevent Java 8 from providing a set of tools and constructs that produce very elegant solutions. In what follows, I have compiled a (very short) list of very elegant solutions that I've encountered and couldn't help but admire.

Recursive traversal of a tree: by Fahd Shariff

Assume a tree made of a root node, where each node has a list of associated children. Please admire this one-liner:

public Stream<TreeNode<E>> stream() {
    return Stream.concat(Stream.of(this), children.stream().flatMap(TreeNode::stream));
}
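
For context, here is a minimal, hypothetical TreeNode class wrapped around that method (the field and helper names are my own assumptions, not from Fahd Shariff's post):

import java.util.ArrayList;
import java.util.List;
import java.util.stream.Stream;

public class TreeNode<E> {

    private final E value;
    private final List<TreeNode<E>> children = new ArrayList<>();

    public TreeNode(E value) { this.value = value; }

    public TreeNode<E> addChild(E childValue) {
        TreeNode<E> child = new TreeNode<>(childValue);
        children.add(child);
        return child;
    }

    public E getValue() { return value; }

    // The one-liner from above: this node first, then every subtree, flattened
    public Stream<TreeNode<E>> stream() {
        return Stream.concat(Stream.of(this), children.stream().flatMap(TreeNode::stream));
    }
}

And a quick check:

TreeNode<String> root = new TreeNode<>("root");
TreeNode<String> a = root.addChild("a");
root.addChild("b");
a.addChild("a1");

// Depth-first, pre-order traversal: root, a, a1, b
root.stream().map(TreeNode::getValue).forEach(System.out::println);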

Count the occurrence of specific words in a larger passage: by Ken Kousen

public Map<String, Integer> countWords(String passage, String... strings) {
    Map<String, Integer> wordCounts = new HashMap<>();
    Arrays.stream(strings).forEach(s -> wordCounts.put(s, 0));
    Arrays.stream(passage.split(" ")).forEach(
        word -> wordCounts.computeIfPresent(word, (key, val) -> val + 1));
    return wordCounts;
}
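
A quick, hypothetical usage example (the passage and target words are made up); note how computeIfPresent quietly ignores any word that was not seeded into the map:

Map<String, Integer> counts = countWords("to be or not to be", "to", "be", "question");
System.out.println(counts); // to=2, be=2, question=0 (HashMap iteration order is not guaranteed)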

Fibonacci Sequence with memoization: by Ken Kousen

private Map<Long, BigInteger> cache = new HashMap<>();

public BigInteger fib(long i) {
    if (i == 0) return BigInteger.ZERO;
    if (i == 1) return BigInteger.ONE;
    return cache.computeIfAbsent(i, n -> fib(n - 2).add(fib(n - 1)));
}
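
A couple of sample calls (values checked by hand); the cache keeps every intermediate result, so later calls reuse earlier work:

System.out.println(fib(10)); // 55
System.out.println(fib(50)); // 12586269025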

Monday, October 22, 2018

Asynchronous Messaging: RabbitMQ Introduction

RabbitMQ is considered today a stable, open source message broker implementation. Many regard it as the natural evolution of JMS. What it brings to the table is interoperability between disparate and heterogeneous parties: a .NET client, for example, can seamlessly exchange messages with a Java consumer with minimal changes to either side. It is worth noting that RabbitMQ is built in Erlang, a leading language in telecommunications systems with built-in support for fault tolerance.

RabbitMQ promotes AMQP (Advanced Message Queuing Protocol) as the wire-level, or network, protocol for exchanging messages. It is a binary protocol that deals with the low-level details of encoding and marshaling message contents.

Architecturally, RabbitMQ provides the following advantages:

  • Reliability: Aside from being built with Erlang, RabbitMQ can be configured to persist messages so that, in case of a server crash, all messages can be restored. Additionally, producers and consumers can acknowledge proper delivery/reception of messages.
  • Customized routing: RabbitMQ supports different routing mechanisms through the use of exchanges. It can, for example, provide point-to-point communication through direct routing, or selective message delivery (similar to JMS message selectors) so that only events carrying a certain "routing key" get delivered to a queue.
  • Built-in support for clustering and high availability: Many instances of RabbitMQ can be logically grouped into a single cluster in order to provide redundancy and, ultimately, high availability in the case of crashes.
  • Scripting and administration: RabbitMQ provides a web-based console for monitoring and administration, as well as a command line interface to automate administration through scripts.
  • Versatility: there is a plethora of clients for different platforms/technologies. 


Terminology

Since a picture is worth a thousand words, let's start with a high-level schematic from the RabbitMQ documentation:


  • Publisher: The party at the origin of the message to be sent.
  • Consumer: The destination party that expresses its interest in one or more messages.
  • Message broker: The messaging solution, in this case RabbitMQ. It is made of:
    • Exchanges: The abstraction describing an intermediate endpoint/stage on the message broker where all messages are delivered first.
    • Queues: The intermediate endpoints to which messages are routed from an exchange.
    • Route: Provides a routing strategy defining how and when messages on an exchange should be relayed to a queue. This usually takes the form of a routing key and follows a binding definition. (A minimal code example follows this list.)
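
To make these terms concrete, here is a minimal, hypothetical "hello world" using the RabbitMQ Java client (amqp-client 5.x), assuming a broker running on localhost with the default credentials; the queue name and message are made up:

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

public class HelloRabbit {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumption: local broker, default guest account

        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {

            // Declare a queue; it is implicitly bound to the default (direct) exchange,
            // with the queue name acting as the routing key
            channel.queueDeclare("hello", false, false, false, null);

            // Publisher side: send one message through the default exchange
            channel.basicPublish("", "hello", null, "Hello, RabbitMQ!".getBytes("UTF-8"));

            // Consumer side: register interest in messages arriving on the queue
            DeliverCallback onDelivery = (consumerTag, delivery) ->
                    System.out.println("Received: " + new String(delivery.getBody(), "UTF-8"));
            channel.basicConsume("hello", true, onDelivery, consumerTag -> { });
        }
    }
}

A named exchange, an explicit binding, and a routing key come into play as soon as you need more elaborate routing than the default exchange provides.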

In the next installment, we'll have a look at the different types of exchanges and various patterns for exchanging information over RabbitMQ.

Sunday, October 21, 2018

Microservices: Why Asynchronous communications?

The shift towards microservices entails embracing and assessing different approaches to each and every aspect of a system. Today, we'll dwell on asynchronous communication between microservices, or between client and service at large.

Why asynchronous?

Or rather, why stick to synchronous communication, should we ask? It is true that synchronous communication has come to be regarded as the de facto pattern for exchanging information between any two endpoints; think of TCP, HTTP, FTP, and so on. It does indeed exhibit the following advantages:
  1. Simpler reasoning and tracing: Everything starts when an outbound request is made from the client, and we can step through the whole process all the way until the request is processed by the server and a response is sent back accordingly.
  2. "Instantaneous" feedback: When the client breaks the contract, or the server deems the current request invalid, feedback is sent right away to the client. This can take the form of response codes, redirects, or even stack traces.
  3. Natural: In the sense that we like to think of thing A happening before thing B, or thing C happening as a consequence of thing B.
  4. Translates directly to models: We're all used to sequence diagrams in UML, and very few people can tell off the top of their head how asynchronous calls are modeled. Instead, we like to think of them as a succession of synchronous calls.
I assume the list could be longer, but it fails to address the following concerns:
  1. How should a long-running process on the server be presented to the client? If the server is experiencing a peak in demand, or if resources are scarce, chances are the client will be affected too. Should it just wait? Should it time out? How should the client interpret this situation?
  2. When an upstream dependency (server A calls server B) is unavailable, should the client care? Why should the client be sensitive to our ecosystem? From its standpoint, it reached the server; the rest is all internals.
  3. Guaranteed response time: what if I told you that my server will respond in O(1) no matter what? How is that for usability?
  4. What if the client does not need a response right away? Assume I place an order in an e-commerce application. I know what I ordered, I know how much I paid, and I know when and where it will be delivered. Why should I hang in there until the server finishes all its housekeeping (updating product stocks, notifying the warehouse, and so on)?
  5. Better yet, you don't know whom you're talking to, but you're sure that your request will be honored. I always feel this way when I need an administrative paper: it seems that I always need to find the right person and, more importantly, at the right time. I always wished I could just submit my request to a "mailbox", head home, and be sure that it would be fulfilled.
  6. My request is more urgent than yours. Now you're in a nuclear power plant, and the maintenance staff keep track of major events by posting them to the monitoring application: business as usual, cooling check... Suddenly, Mr. Hero realizes that a major event jeopardizing the safety of the whole neighborhood is about to take place, and posts an emergency-stop payload. I wouldn't like to be the one processing a ventilation check while making the emergency stop wait.
These are some of the pain points that asynchronous communication relieves us from: long-running tasks, unavailable servers, unresponsive servers, server transparency, request reordering and prioritizing... In the next post, we'll see how this can be done.

Monday, October 15, 2018

Specification Pattern in Java

This blog post is mainly inspired by the work of Martin Fowler and Eric Evans in their seminal paper, Specifications.

I find this pattern ingenious, as it remedies years of head bashing and hair pulling in search of ways to isolate the evolution of a domain model from the mechanisms involved in inspecting and querying it, while maintaining a high level of encapsulation. I like to think about it as "reflection" on the domain model. Hopefully, things will become clearer as we proceed.

Let us suppose that we have an e-commerce application with a very exotic product catalog where products are represented by a type hierarchy, for example:

public class Product {
    String gTIN;
    double price;
    int units;
}

public class Television extends Product {
    ResolutionEnum resolution;
    float screenSize; // in inches
}



Suppose we'd like to search for products matching the following criteria:

  1. The maximum price is 3000 Dh (Moroccan dirhams, MAD)
  2. The screen size is more than 12.1''


A naive approach would simply do it this way:

public class SearchProductService {

    ProductRepository inMemoryRepository;

    public List<Product> findBy(double maxPrice, float minScreenSize) {
        List<Product> allProducts = inMemoryRepository.findAll(p -> p.price < maxPrice);
        return allProducts.stream()
                .filter(p -> p instanceof Television)
                .filter(p -> ((Television) p).screenSize > minScreenSize)
                .collect(toList());
    }
}

Now suppose we also want to be able to search by resolution and available units. It won't get any better

public class SearchProductService {

    ProductRepository inMemoryRepository;

    public List<Product> findBy(double maxPrice, float minScreenSize) {
        // ...
    }

    public List<Product> findBy(ResolutionEnum resolution, int availableUnits) {
        // ...
    }

    public List<Product> findBy(ResolutionEnum resolution, int availableUnits, double maxPrice, float minScreenSize) {
        // ...
    }

    // handle all combinations?
}

Clearly, this solution does not scale, and it forces us to expose some properties of the product, maybe some that we'd rather keep private. What is even worse is that the logic built on top of these properties lives far away from where the properties themselves are defined. By all design standards, this solution is a time bomb and a maintenance nightmare. Can we do better?

Of course we can; we wouldn't be here otherwise :-). The solution has been formalized under the name specification pattern. Basically, we create our search criteria and let the product tell us whether it meets them or not. Quoting the master Martin Fowler: "Tell, don't ask".


class Product {

    public boolean satisfies(SearchCriteria criteria) {
        // Open up for extension
        return criteria.isSatisfiedBy(this);
    }
}

class Television extends Product {
    // No change here
}

/***************    Define Criteria ***********************/
public interface SearchCriteria{
     boolean isSatisfiedBy(Product product);
}

/**************** Composite **************************/
public class Criteria implements SearchCriteria {

    private List<SearchCriteria> criteria;

    public Criteria(List<SearchCriteria> criteria) {
        this.criteria = criteria;
    }

    // We could also add the operators and, or, not for combining criteria; here it is an implicit AND
    public boolean isSatisfiedBy(Product product) {
        Iterator<SearchCriteria> iterator = criteria.iterator();
        while (iterator.hasNext()) {
            if (!iterator.next().isSatisfiedBy(product))
                return false;
        }
        return true;
    }
}

/****************  Price Criterion  **************************/
public class PriceCriterion implements SearchCriteria {

    private final Operator operator;
    private final double target;

    public PriceCriterion(Operator operator, double target) {
        this.operator = operator;
        this.target = target;
    }

    public boolean isSatisfiedBy(Product product) {
        // One possible implementation: compare the product price to the target
        switch (operator) {
            case lessThan:   return product.price < target;
            case equal:      return product.price == target;
            case largerThan: return product.price > target;
            default:         return false;
        }
    }
}

/***************  Criteria builder ********************/
public class SearchCriteriaBuilder {

    protected List<SearchCriteria> criteria = new ArrayList<>();

    private PriceCriteriaBuilder priceCriteriaBuilder;

    public PriceCriteriaBuilder withPrice() {
        if (priceCriteriaBuilder == null)
            priceCriteriaBuilder = new PriceCriteriaBuilder();
        return priceCriteriaBuilder;
    }

    public SearchCriteriaBuilder and() {
        return this;
    }

    public SearchCriteria build() {
        return new Criteria(criteria);
    }

    // Shared by the criterion classes; it could also live in its own file
    public enum Operator {
        lessThan,
        equal,
        largerThan
        // ...
    }

    public class PriceCriteriaBuilder {

        Operator operator;
        double targetPrice;

        public PriceCriteriaBuilder being(Operator operator) {
            this.operator = operator;
            return this;
        }

        public SearchCriteriaBuilder value(double targetPrice) {
            this.targetPrice = targetPrice;
            SearchCriteriaBuilder.this.criteria.add(new PriceCriterion(operator, targetPrice));
            return SearchCriteriaBuilder.this;
        }
    }
}

Phew! With all of this behind us, let's look at how the client code turns out:
SearchCriteria criteria = new SearchCriteriaBuilder()
        .withPrice()
            .being(lessThan).value(3000)
        .and()
        .withScreenSize() // could be implemented in the same manner
            .being(largerThan).value(12.1)
        .build();

List<Television> televisions = ProductRepository.getTelevisions();
List<Television> matches = televisions.stream()
        .filter(tv -> tv.satisfies(criteria))
        .collect(toList());


Please admire the conciseness and expressiveness of the solution. It is true that we had to put in a lot of code behind the scenes, but that's the price if you're willing to use the porcelain instead of the plumbing. Please note that this implementation of the specification pattern is a mashup of several design patterns:


  • Composite Pattern: the product deals only with the Criteria class, and it does not change whether there is one criterion or many.
  • Visitor Pattern: each concrete criterion class deals with the specifics of the product, and the interaction with the product is kept simple thanks to a single method, satisfies(SearchCriteria criteria).
  • Command Pattern: we build the list of criteria and execute them once, at the end, with a single call to satisfies(criteria).
  • Builder Pattern: allowed the creation of a fluent, progressive API.


I hope that was an interesting read.

Thank you

Wednesday, October 10, 2018

Automate the repetitive stuff: Scoop


Scoop: Command line installer for windows

All right, so you've just finished reading up on a new stack, language, or tool and want to practice, but you don't really want to take the time to dive into the details of setting up the new environment and tools. Fear not, Scoop is here to the rescue.

Scoop is written as a bunch of PowerShell commands that shield you from the details and locations of the different packages and versions you want to try out.


Installation


All you've got to do is install it by running the following commands in PowerShell:

# Set the permission rights
Set-ExecutionPolicy RemoteSigned -scope CurrentUser

# Install it
iex (new-object net.webclient).downloadstring('https://get.scoop.sh')

# Optional, if you do not have git already
scoop install git

# Install all known buckets (we'll get to that in a moment)
scoop bucket known | % { scoop bucket add $_ }


You end up with the following directory in your home directory





Type scoop in your console:





Please note that you have to run Scoop under PowerShell; otherwise you might run into:


The term 'C:\Users\lhechma\scoop\apps\scoop\current\bin\scoop.ps1' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.


Architecture

Layout

Scoop adds itself to your home directory as a PowerShell script under ~/scoop/shims.

All its commands are themselves scripts, distributed across three main subfolders:






  • bin: the main entry point, mainly through the scoop.ps1 script
  • lib: its internal supporting libraries, for example for installing shortcuts on the Start menu or parsing JSON
  • libexec: scripts which almost mirror the commands you get on the console.

How about buckets?

Scoop uses the term bucket in its literal meaning: it is the isolation and addressing mechanism for the different meta-repositories, each of which contains configuration details for the software packages within it. This means you need to find a bucket for your desired package if it is not covered by the default list of buckets preconfigured with Scoop. In the example below, I wanted to try out Clojure (it is about time, I guess), which is not in any of the default buckets.


So I had to add a new bucket with Clojure in it:

scoop bucket add wangzq https://github.com/wangzq/scoop-bucket

scoop install clojure


This is what I ended up with





Go to your console and type clojure.


I finally got Clojure working, with a maximum of 5 commands and without leaving my console. What a scoop!


Minimal intrusion/Transparency:


Scoop does its magic while doing its best not to pollute your PATH environment variable with the newly installed packages (sometimes it has to, as with the Java JDK). The trick is to create a new cmd shim with the name of the package under ~/scoop/shims; only this path is added to your user PATH.

Conceptual integrity:


Scoop manages itself the same way it manages its packages. DRY at its best.

Simplicity:


Version management has always been a headache. Think back to Ant, or Maven with its optional dependencies... Scoop manages package versioning with... folder shortcuts. Basically, a new shim (executable, if you will) points to a version folder, which in turn points to the proper version, so you can have multiple versions of the same package without any issues. The old motto "add one level of indirection" does wonders here.

Additionally, Scoop is built upon well-established and well-understood technologies: Git and JSON. Your learning curve is almost nonexistent if you've been doing any software development in recent years.

Criticism


Just because I love to have the rebel attitude, I'm going to mention that I would have loved less verbosity in the commands used internally to extract and run installers (as in the pre_install and post_install hooks). But then again, since Scoop is tailored to Windows and interoperability is not required here, a DSL would probably do more harm than good.

Comparison

I can't do better than the creator, so I'll let you check how Scoop compares to Chocolatey here.


See you in the next installment for more productivity tools.
Thanks for reading





Tuesday, August 8, 2017

Reify your dreams

After long hours of brainstorming and hair pulling, I finally arrived at the second most important decision in my life: what should I name my company? (The first most important decision being whether I should propose to my girlfriend back then.) There are many ways to tackle such a decision: I could have taken Darwin's approach when he was deciding whether or not to propose to his cousin: take a pencil and a piece of paper, weigh all possible consequences, and only then choose the most gratifying option. So much for an algorithm for happiness. Luckily, my first decision, marrying that person, turned out to be a good one, and it required a leap of faith. That same mindset is the one I tried to adopt when deciding on the name of my company when I started freelancing. Isn't coding, after all, about taking someone else's (or one's own) idea and making it concrete? That, my friends, is exactly what Reify means according to Merriam-Webster's dictionary. (Some computer science aficionados might realize that it also refers to a concept in type theory where a concrete type representation is available at runtime, as opposed to no runtime type representation, as with Java generics erasure.) There you go, I had to throw in a little bit of technical stuff :-)
But Reify is for me more than a verb, or some computer science jargon. It is filled with emotions and memories. Memories of how supportive my wife was at the very beginning of my career, when I abruptly lost my job because of the 2008 financial crisis. And while I'm still emotional (please don't leave at this point), I'm also grateful to my mom, who taught me to excel at reading at age 5; to my father, who financed my studies and dissuaded me (in his own stubborn way) from switching my major when pursuing my master's degree; not to mention my two boys, who teach me every day that the simple things in life count for much more than meets the eye.
Reify is also the concretization of more than 10 years of sweat, blood, and tears. Aside from several part-time positions during my bachelor's degree as a library technician, teaching assistant, and lab assistant, my first position was in the realm of microcontrollers. Then I moved to network management solutions, and finally to enterprise applications. In spite of the apparent differences, there is one common thread among all of these: requirements and deliverables. Software engineering is all about the efficient management of both, from the detailed level of a method name or member variable to the smile you bring to the product owner when the application runs without major incidents in production.
As I advanced in my career, I realized that I needed less and less to be told what to do. Until one day, it just hit me: why can't I just be my own boss? But the contracting game is not a single-variable equation; it is, on the contrary, a multivariable one. And I have to confess that just as businesses look for talent, talent also looks for business. So while each party can fully focus on their end of the deal, TopTal provides the ideal platform for connecting the two and keeping things flowing.
To make a long story short (I could go on forever :-)), I am really looking forward to joining the software developers community at TopTal, and I hope that this post will help speed up my onboarding process.



Friday, September 30, 2016

FP in a Nutshell

Yes, admit it, you must be thinking it as you're reading this: yet another post on functional programming. I do not intend to shed new light on the heated debate between OO and FP. Instead, I'm gonna pay tribute to those authors who have helped peel FP down to its bare bones.
Being a Java developer, my starting block could only be Functional Programming for Java Developers by Dean Wampler.
Conceptually, functional programming is all about:
  • A core set of fundamental data structures, with lists being the most common and basic choice, acting as data carriers, together with a toolkit to operate on that data. This toolkit is made of combinators (map, fold/reduce, filter) that open the door to virtually endless transformations.
  • The variability of behavior that used to be materialized through polymorphism in OO takes a different form: lambda functions. With the help of the combinators, you can pass functions to your data holder (taking the form of streams in Java 8), and they are applied to each element of that stream.
  • Declarative programming, which can have a multitude of meanings, as described by Robert Harper. As I'm still in my functional-thinking infancy, I'm inclined to embrace 3 of his definitions (for lack of understanding of the others):
    1. “Declarative” means “high-level”: The yardstick I will be using here is very subjective, but I find that this heuristic serves me well: the closer the code I write is to natural language, the higher level it feels. For example, suppose I have the top football scorers of all time, each one with a number of goals scored in a given season. How can I get a ranking of players by goals scored, in decreasing order?
      First, the Player class
      class Player {
              String name;
              int goals;
      
              Player(String name, int goals) {
                  this.name = name;
                  this.goals = goals;
              }
      
              public String getName() {
                  return name;
              }
      
              public int getGoals() {
                  return goals;
              }
          }
      
      The solution
      Stream<Player> allTimeBestScorers = Stream.of(new Player("Luis Suarez", 14),
        new Player("Wayne Rooney", 13), new Player("Francesco Totti", 14),
        new Player("Didier Drogba", 15), new Player("David Villa", 12),
        new Player("Robbie Keane", 15), new Player("Samuel Eto'o", 16),
        new Player("Zlatan Ibrahimovic", 17), new Player("Lionel Messi", 20),
        new Player("Cristiano Ronaldo", 20));
      
        allTimeBestScorers.collect(
           Collectors.groupingBy( 
            Player::getGoals,
            () -> new TreeMap<>(Comparator.reverseOrder()), 
            Collectors.toList())
         ).forEach((key,value) ->
           System.out.println(key + " goal(s):" + value.stream().map(Player::getName).collect(Collectors.joining(","))) );
      
      The output
      20 goal(s):Lionel Messi,Cristiano Ronaldo
      17 goal(s):Zlatan Ibrahimovic
      16 goal(s):Samuel Eto'o
      15 goal(s):Didier Drogba,Robbie Keane
      14 goal(s):Luis Suarez,Francesco Totti
      13 goal(s):Wayne Rooney
      12 goal(s):David Villa
      
      First note how succinct the solution in functional style is. Here is my one-to-one correspondence between "high level" and the implementation:
      • group players by -> Collectors.groupingBy
      • group by what? the goals they scored of course -> Player::getGoals
      • in which data structure? A map -> new TreeMap<>
      • top scorers first? no problemo -> Comparator.reverseOrder()
      • finally, how should the players with the same number of goals be arranged? a list is the simplest choice -> Collectors.toList().
      • present the output -> forEach((key, value) -> ...); it does not really matter if it is not entirely clear what is happening here.

    2. “Declarative” means “not imperative”: I've been playing around lately with a somewhat classical problem: rotating an array to the left by x positions. In the imperative style, I came up with 2 possible solutions, one that requires extra space, and one that requires extra computation.
      First the simple solution with a temporary array
      public static IntStream rotateWithExtraSpace(int[] elements, int shift) {
              int effectiveShift = shift % elements.length;
              if (effectiveShift > 0) {
                  int[] temp = new int[effectiveShift];
                  int j = 0;
                  for (; j < effectiveShift; j++) {
                      temp[j] = elements[j];
                  }
                  for (; j < elements.length; j++) {
                      elements[j - effectiveShift] = elements[j];
                  }
                  for (int i = 0; i < temp.length; i++) {
                      elements[j - temp.length + i] = temp[i];
                  }
              }
              return Arrays.stream(elements);
          }
      
      Then the "dreaded" recursive solution, though it could be further improved.
          private static IntStream rotateRecursively(int[] elements, int shift) {
              int effectiveShift = shift % elements.length;
              doRotate(elements, effectiveShift, 0, elements.length);
              return Arrays.stream(elements);
          }
      
          private static void doRotate(int[] elements, int shift, int from, int to) {
              if (shift == 0)
                  return;
              for (int j = from + shift; j < to; j++)
                  swap(elements, j, j - shift);
              int translated = ((to - from) / shift) * shift + from;
              int low = to - shift;
              doRotate(elements, (translated - low)%shift , low, to);
          }
      
      If there is one thing to note here, it is how "low level" these solutions are. If you were not given any context, you couldn't tell what exactly is happening. I must confess that I had to wrestle with the different indices to get it right. This is the land of imperative programming: temporary variables, adjustments, mutations. Everything to make your head spin when you do not get it right.

    3. “Declarative” means “what, not how”: I will present here an alternative solution to the array rotation problem. Let me first describe it, then show you how it translates to code. I'm gonna use a stack of queues. The first queue to be pushed onto the stack will be filled up to the number of elements to be rotated, then all subsequent elements will be added to a different queue on top of the previous one. So this is the what, and here it is in Java 8:
      public static IntStream functionalRotation(int[] elements, final int shift) {
              Supplier<Stack<Queue<Integer>>> supplier = () -> {
                  Stack<Queue<Integer>> stack = new Stack<>();
                  Queue<Integer> queue = new LinkedList<>();
                  stack.push(queue);
                  return stack;
              };
              BiConsumer<Stack<Queue<Integer>>, Integer> accumulator = (stack, element) -> {
                  if (stack.size() > 1 || stack.peek().size() < shift) {
                      stack.peek().add(element);
                  } else {
                      Queue<Integer> shiftedQueue = new LinkedList<>();
                      shiftedQueue.add(element);
                      stack.push(shiftedQueue);
                  }
              };
              BinaryOperator<Stack<Queue<Integer>>> combiner = (left, right) -> {
                  if (left.isEmpty())
                      return right;
                  else {
                      left.addAll(right);
                      return left;
                  }
              };
              Function<Stack<Queue<Integer>>, IntStream> finisher = acc ->
              {
                  Queue<Integer> shifted = new LinkedList<>();
                  while (!acc.isEmpty()) {
                      shifted.addAll(acc.pop());
                  }
                  Integer[] ints = shifted.toArray(new Integer[]{});
                  return Stream.of(ints).mapToInt(Integer::intValue);
              };
              return Arrays.stream(elements).boxed().collect(
                      Collector.of(supplier, accumulator, combiner, finisher));
      
          }
      

      So much for "what, not how"; I must admit that there is a little bit of both. In my defense, I would say that the how becomes native constructs: the collector, supplier, accumulator, combiner, and finisher. The what is captured by what each piece of the puzzle has to provide. I will give a simple explanation of what each element provides, and discuss these in greater detail in a different post.

      • The supplier gives me the data structure that I will use: a stack of queues, as described above.
      • The accumulator tells me what to do when I wish to add an element to my stack.
      • The combiner is used with parallel streams: when more than one thread runs the same logic, it defines how to combine the stacks produced by the different threads.
      • The finisher waits until everything is complete, and then "finishes off" by transforming my stack into an array (or an IntStream here).
      The rest is all noise for the sake of this discussion. I hope that the next time you hear about functional thinking, it will make a little more sense. Stay tuned for a new episode.
       

Friday, December 25, 2015

Git one liners

Here is my collection of git commands that saved my day; I hope they will also save yours.

Q: What's the freaking branch with that commit number?
A: git branch --contains commit#.
Applicability: You're overwhelmed with bugfix branches, alternating between half a dozen while awaiting your reviewers' comments. Naturally, you get their feedback and, as always, there is something to make better. Just when you're inclined to do so, you realize that you don't even remember which local branch your commit was in.
What to do if it is not there? Fear not, its twin sister git branch -r --contains commit# is here to the rescue.
Having found your remote branch, now run
git remote show 'remote-repo'
If no local branch is tracking the remote one with your commit, then you must have created it from another git project. That was my problem, anyway.
