BeyeNETWORK UK Blogs. Copyright BeyeNETWORK 2005 - 2019

Orchestral Manoeuvres

In December 2018 it was announced that TIBCO, the data integration vendor, was buying Orchestra Networks, the MDM vendor that was originally French but now has over half of its business in the USA. This seems to me a potentially good move for both parties. For Orchestra it gives an exit for its founders, who set the company up way back in 2000. The deeper pockets of TIBCO potentially allow the software to grow faster than it could as a stand-alone company, provided that TIBCO execute the acquisition well, of which more anon. TIBCO in theory already had some MDM capability from a much earlier acquisition of Velosel in 2005, but in practice they had not integrated this very well into their sales and marketing channels, and their offering had largely disappeared from the market. As Informatica have shown, data integration is a natural companion for MDM, along with data quality, so TIBCO can now bring to the market a richer offering that covers both data integration and master data management. Orchestra Networks has built up an excellent product (EBX) that regularly comes top of The Information Difference Landscape in terms of technology, and obtains unusually high customer satisfaction scores in our annual survey.

How well this pans out will depend on how well TIBCO deal with the acquisition. The technologies are naturally complementary, but as well as plugging the tools together technically it will be important to integrate MDM into the TIBCO sales and marketing channel. Sales staff are often resistant to change, and revert to selling what they understand when they have targets to hit, so there will need to be some education of the sales force about what Orchestra can do for TIBCO. Informatica have done a very good job with their Siperian acquisition in particular (less so with Heiler so far), and MDM is often the lead in many of their larger platform sales these days. TIBCO need to learn from this and similarly put Orchestra’s technology at the heart of their customer offering. It will be important to retain key Orchestra staff, both engineering and consulting. TIBCO also need to lay out a migration path for their existing MDM customers. With these caveats, this seems to me to be an astute purchase and one with the potential to work well for both parties.

Sun, 6 Jan 2019 17:13:54 MST
Negative energy prices and artificial intelligence

Since renewable energy has started to become popular, an odd problem has appeared in wholesale energy markets: negative prices.

In other words, energy plants sometimes pay their customers to take energy off their hands. Usually older, less flexible plants that can’t shut down without incurring costs are affected.
One solution to this problem is batteries. The idea is to store the energy when it is overabundant, and use it later when it is expensive. This is sometimes called "peak shaving".
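To make the arbitrage concrete, here is a toy simulation of peak shaving. Everything in it is made up for illustration: the hourly prices, the battery capacity, and the round-trip efficiency figure.

```python
def peak_shave(prices, capacity_mwh, efficiency=0.85):
    """Greedy sketch: charge 1 MWh in each negative-price hour (we are
    paid to take the energy), then sell everything at the highest-priced
    hour that remains. Returns total profit in currency units."""
    stored = 0.0
    profit = 0.0
    for i, p in enumerate(prices):
        if p < 0 and stored < capacity_mwh:
            stored += 1.0       # take 1 MWh off the grid's hands
            profit += -p        # a negative price means we earn -p
        elif p > 0 and stored > 0 and p == max(prices[i:]):
            # best remaining hour: discharge, losing some energy round-trip
            profit += stored * efficiency * p
            stored = 0.0
    return profit

# hypothetical day: negative prices at the midday solar peak, high evening prices
prices = [30, 20, -5, -10, -8, 15, 40, 60, 55, 35]
print(round(peak_shave(prices, capacity_mwh=3), 2))
```

With these numbers the battery is paid 23 to charge and then earns another 153 selling at the evening peak, which is the whole business model in miniature.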
Batteries are a great idea, but not the only solution. Another is to simply find an application that is energy hungry and can be run intermittently.

One possible application for soaking up excess energy is desalination. For example, a desert region near an ocean could build solar plants and run desalination only during the day. The question is whether the energy savings justify building a desalination plant that runs only 12 hours a day.
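That trade-off can be sketched on the back of an envelope. All the numbers below are hypothetical (the amortised plant cost, the output rate, and the energy prices); only the energy-per-cubic-metre figure is in the typical range quoted for reverse osmosis.

```python
# Does a desalination plant running only during cheap solar hours beat
# one running around the clock at normal prices? Hypothetical numbers.
capex_per_day = 1000.0       # amortised plant cost per day (assumed)
energy_kwh_per_m3 = 3.5      # rough reverse-osmosis energy need
output_m3_per_hour = 100.0   # plant throughput (assumed)

def cost_per_m3(hours, price_per_kwh):
    """Total daily cost (capital + energy) divided by daily output."""
    volume = output_m3_per_hour * hours
    energy_cost = volume * energy_kwh_per_m3 * price_per_kwh
    return (capex_per_day + energy_cost) / volume

full_time = cost_per_m3(24, 0.10)   # all day at normal prices
solar_only = cost_per_m3(12, 0.01)  # half the day, nearly free energy
print(round(full_time, 3), round(solar_only, 3))
```

With these made-up figures the half-time plant actually loses: the idle capital outweighs the cheap energy, which is exactly the question the paragraph above raises.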
Another way to make use of energy that might go to waste is using it to power computers that perform analytics. The energy demand of data centers is growing quickly.

One energy-hungry application is Bitcoin. Bitcoin mining consumes huge amounts of energy, so it is a prime candidate for exploiting negative energy prices. In fact there are already a lot of bitcoin miners in Western China, where solar and wind installations have outstripped grid upgrades. In these areas renewable energy is often curtailed because the grid can’t keep up, so the energy is basically free to the miners.

Extremely cheap bitcoin mining arguably undermines the whole concept, but here is a more productive idea: training artificial intelligence. For example, have a look at gcp leela, a clone of Google Deepmind’s Alphago Zero.
The entire source code is free, and it’s not a lot of code. But that free code is just the learning model, and it’s based on well-known principles. It’s probably just as good as Deepmind’s Alphago Zero when trained, but the developers figure it would take them 1,700 years to train -- unless of course they could harness other resources. This is partly because they don’t have access to Google’s specialized TPU hardware. Whatever the reason, training it is going to burn through a lot of energy.
This would be a great application for negatively priced energy. Game playing is more a stunt than a commercial application, but when they are paying you to use the energy, why not? And as time passes, more useful AI apps will need training.
So it comes down to whether the business model of peak shaving with batteries makes more economic sense than banks of custom chips training neural networks in batches. The advantage of batteries is that you can sell the energy later for more, but storage is not terribly efficient, and using the energy directly is a better idea. Cheap computer hardware and a growing demand for AI may fit this niche very well.
This puts a whole new twist on the idea that big tech companies are investing in renewables. These companies make extensive use of AI, which is trained in batch processes.

Thu, 4 Jan 2018 08:33:00 MST
Understanding Artificial Neural Networks

Artificial neural networks are computer programs that learn a subject matter of their own accord. So an artificial neural network is a method of machine learning. Most software is created by programmers painstakingly detailing exactly how the program is expected to behave. But in machine learning systems, the programmers create a learning algorithm and feed it sample data, allowing the software to learn to solve a specific problem by itself.

Artificial neural networks were inspired by animal brains. They are a network of interconnected nodes that represent neurons, and the thinking is spread throughout the network. 

But information doesn’t fly around in all directions in the network. Instead it flows in one direction through multiple layers of nodes from an input layer to an output layer. Each layer gets inputs from the previous layer and then sends calculation results to the next layer. In an image classification system, the initial input would be the pixels of the image, and the final output would be the list of classes.

The processing in each layer is simple: each node gets numbers from multiple nodes in the previous layer and adds them up. If the sum is big enough, it sends a signal to the nodes in the next layer. Otherwise it does nothing. But there is a trick: the connections between the nodes are weighted. So if node A sends a 1 to nodes B and C, it might arrive at B as 0.5, and at C as 3, depending on the weights of the connections.
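That weighted-sum-and-threshold rule fits in a few lines. The threshold value and the weights here are just the numbers from the example above, not anything from a real network:

```python
def neuron(inputs, weights, threshold=1.0):
    """Weighted sum of the incoming signals; fire (output 1.0) only if
    the sum clears the threshold, otherwise stay silent (0.0)."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1.0 if total >= threshold else 0.0

# node A sends 1.0; the connection weights scale what arrives at B and C
signal_at_b = 1.0 * 0.5   # arrives as 0.5
signal_at_c = 1.0 * 3.0   # arrives as 3.0

# a downstream node receiving both signals with unit weights fires
print(neuron([signal_at_b, signal_at_c], [1.0, 1.0]))
```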

The system learns by adjusting the weights of the connections between the nodes. To stay with visual classification, it gets a picture and guesses which class it belongs to, for example "cat" or "fire truck". If it guesses wrong, the weights are adjusted. This is repeated until the system can identify pictures reliably.
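A minimal sketch of this guess-then-adjust loop. Real networks use backpropagation; the perceptron-style update below is a deliberately simple stand-in, and the "features" for the two classes are invented:

```python
def train(samples, labels, lr=0.1, epochs=50):
    """Guess, compare with the right answer, nudge the weights on error.
    A perceptron update, standing in for full backpropagation."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            guess = 1 if x[0] * w[0] + x[1] * w[1] + b >= 0 else 0
            err = target - guess        # zero when the guess was right
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# made-up two-number "features" for pictures: class 1 = "cat", 0 = "fire truck"
samples = [(0.9, 0.1), (0.8, 0.2), (0.1, 0.9), (0.2, 0.8)]
labels = [1, 1, 0, 0]
w, b = train(samples, labels)
predict = lambda x: 1 if x[0] * w[0] + x[1] * w[1] + b >= 0 else 0
print([predict(x) for x in samples])
```

After a few passes over the data the adjustments stop, because every guess is already right.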

To make all this work, the programmer has to design the network correctly. This is more an art than a science, and in many cases, copying someone else’s design and tweaking it is the best bet.

In practice, neural network calculations boil down to lots and lots of matrix math operations, as well as the threshold operation the neurons use to decide whether to fire. It’s fairly easy to imagine all this as a bunch of interconnected nodes sending each other signals, but fairly painful to implement in code.
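Here is what the matrix view looks like for two tiny layers. The layer sizes are arbitrary, the weights are random, and a hard threshold stands in for the smooth activation functions real networks use:

```python
import numpy as np

def layer(x, W, b):
    """One dense layer: a matrix multiply, then a hard fire/no-fire
    threshold in place of the usual smooth activation."""
    z = x @ W + b                  # x is (batch, n_in), W is (n_in, n_out)
    return (z > 0).astype(float)   # each neuron fires (1.0) or not (0.0)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))        # 4 samples, 3 input values each
W1 = rng.normal(size=(3, 5))       # layer 1 maps 3 inputs to 5 neurons
W2 = rng.normal(size=(5, 2))       # layer 2 maps 5 neurons to 2 outputs
h = layer(x, W1, np.zeros(5))
y = layer(h, W2, np.zeros(2))
print(y.shape)
```

Keeping the matrix orientations straight (which dimension is the batch, which is the neurons) is exactly the bookkeeping the text complains about.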

The reason it is so hard is that there can be many layers that are hard to tell apart, making it easy to get confused about which is doing what. The programmer also has to keep in mind how to orient the matrices the right way to make the math work, and other technical details. 

It is possible to do all this from scratch in a programming language like Python, and that is recommended for beginner systems. But fortunately there is a better way for advanced systems: in recent years a number of libraries such as TensorFlow have become available that greatly simplify the task. These libraries take a bit of fiddling to understand at first, and learning how to deal with them is key to learning how to create neural networks. But they are a huge improvement over hand-coded systems. Not only do they greatly reduce programming effort, they also provide better performance.

Wed, 3 Jan 2018 15:21:00 MST
Psst – Wanna buy a Data Quality Vendor?

Founded in 1993, Trillium Software has been the largest independent data quality vendor for some years, nestling since the late 1990s as a subsidiary of US marketing services company Harte Hanks. The latter was once a newspaper company dating back to 1928, but switched to direct marketing in the late 1990s. It had overall revenues of $495 million in 2015. There was clearly a link between data quality and direct marketing, since name and address validation is an important feature of marketing campaigns. However the business model of a software company is different from a marketing firm, so ultimately there was always going to be a certain awkwardness in Trillium living under the Harte Hanks umbrella.

On June 7th 2016 the parent company announced that it had hired an advisor to look at “strategic alternatives” for Trillium, including the possibility of selling the company, though the company’s announcement made clear that a sale was not a certainty. Trillium has around 200 employees and a large existing customer base, so will have a steady income stream from maintenance revenues. The data quality industry is not the fastest growing sector of enterprise software, but is well established and quite fragmented. As well as offerings from Informatica, IBM, SAP and Oracle (all of which were based on acquisitions) there are dozens of smaller data quality vendors, many of them having grown up around the name and address matching issue that is well suited to at least a partially automated solution. While some vendors like Experian have focused traditionally on this problem, other vendors such as Trillium have developed much broader data quality offerings, with functions such as data profiling, cleansing, merge/matching, enrichment and even data governance.

There is a close relationship between data quality and the somewhat faster growing sector of master data management (MDM), so MDM vendors might seem in principle to be natural acquirers of data quality vendors. However MDM itself has somewhat consolidated in recent years, and the big players in it like Informatica, Oracle and IBM all market platforms that combine data integration, MDM and data quality (though in practice the degree of true integration is distinctly more variable than it appears on Powerpoint). Trillium might be too big a company to be swallowed up by the relatively small independents that remain in the MDM space. It will be interesting to see what emerges from this exercise. Certainly it makes sense for Trillium to stand on its own two feet rather than living within a marketing company, but on the other hand Harte Hanks may have missed the boat. A few years ago large vendors were clamouring to acquire MDM and related technologies, but now most companies that need a data quality offering have either built or bought one. The financial adviser in charge of the review may have to be somewhat creative in who it looks at as a possible acquirer.

Thu, 9 Jun 2016 13:22:32 MST
Informatica MDM Moves To The Cloud

I recently attended the Informatica World event in San Francisco, which drew over 3,000 customers and partners. One key announcement from an MDM perspective was the availability of Informatica MDM for the cloud, called MDM Cloud Edition. Previously Informatica had only a Salesforce-specific cloud offering, via an acquisition in 2012 of a company called Data Scout. This is the first time that the main Informatica MDM offering can be deployed in the cloud, including on Amazon AWS. It is an important step, as moving MDM to the cloud is a slow but inevitable bandwagon, and recently start-ups like Reltio, designed from scratch as cloud offerings, have been able to offer cloud MDM with little real competition. The Informatica data quality technology will apparently be fully cloud-ready by the end of 2016.

The company also launched a product called Intelligent Streaming. This connects lots of data sources and distributes the data for you: for example, a demo showed data from several sources being streamed to a compute engine using Spark, or Hadoop if you prefer, without needing to code. This approach shields developers from some of the underlying complexity of the Big Data environment. Live Data Map is part of the Informatica infrastructure and is a way to visualise data sources both on premise and in the cloud. It also does scheduling in a more sophisticated way than at present, using machine learning techniques.

There were plenty of external speakers, both customers and partners. Nick Millman from Accenture gave a talk about trends in data management, and referred back to his first assignment at a “global energy company” (actually Shell, where I first met him), in which the replication of an executive dashboard database involved him flying from London to The Hague with a physical tape to load up onto a server in Rijswijk. Unilever gave a particularly good talk about their recent global product information management project, in which the (business rather than IT) speaker described MDM as “character building” – hard to argue there.

There were new executives on display, in particular Jim Davis as head of marketing (ex SAS) and Lou Attanasio as the new head of sales (ex IBM).
With Informatica having recently gone private, it will be comforting for their customers that the company is investing as much as ever in its core technology, and certainly in MDM the company reckons it has more developers than Oracle, IBM and SAP combined, though such claims are hard to verify. However there certainly seems to be plenty of R&D activity going on related to MDM judging by the detailed sessions. Examples of additional new developments were accelerators and applications for pharmaceuticals, healthcare and insurance.

Informatica continues to have one of the leading MDM technologies at a time when some of its large competitors appear to be losing momentum in the marketplace for assorted reasons, so from a customer perspective the considerable on-going R&D effort is reassuring. Its next major R&D effort will be to successfully blend the two current major MDM platforms that they have (acquired from Siperian and Heiler), something that their large competitors have singularly failed to achieve thus far with their own acquired MDM technologies.

Mon, 6 Jun 2016 13:34:18 MST
Alphago probably isn’t learning from Lee Sedol

There has been quite a bit of discussion about whether Alphago can learn from the games it plays against Lee Sedol. I think not. At least, not directly.
The heart of the program is the “policy network”, a convolutional neural network (CNN) of the kind designed for image processing. CNNs return a probability that a given image belongs to each of a predefined set of classifications, like “cat”, “horse”, etc. CNNs work astonishingly well, but have the weakness that they can only be used with a fixed-size image to estimate a fixed set of classifications.
The policy network views go positions as 19×19 images and returns probabilities that human players would make each of 361 possible moves. This probability guides the Monte Carlo tree search for good moves that has been used for some time in computer go. The policy network is trained on 30 million positions (or moves) initially.
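A sketch of just the output end of such a network: 361 scores squashed into a probability distribution over moves. A single random linear map stands in here for the many convolutional layers a real policy network has:

```python
import numpy as np

def policy(board_vector, W):
    """Toy policy head: one linear map produces a score per board point,
    and a softmax turns the scores into move probabilities."""
    scores = board_vector @ W
    e = np.exp(scores - scores.max())   # subtract max for numerical stability
    return e / e.sum()

rng = np.random.default_rng(1)
board = rng.normal(size=361)            # a 19x19 position, flattened
W = rng.normal(size=(361, 361))         # stand-in for the trained network
p = policy(board, W)
print(p.shape, round(float(p.sum()), 6))
```

Whatever the network looks like inside, its output has this shape: 361 non-negative numbers summing to one, which is what the tree search consumes.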
CNN (aka “deep learning”) behavior is pretty well understood. The number of samples required for learning depends on the complexity of the model. A model of this complexity probably requires tens of thousands of example positions before it changes much.
The number of samples required to train any machine learning program depends on the complexity of the strategy, not on the number of possible positions. For example, Gomoku ("five in a row", also called gobang) on a 19×19 board would take many fewer examples to train than go would, even though the number of possible positions is also very large.
Another point: Any machine learning algorithm will eventually hit a training limit, after which it won’t be able to improve itself by more training. After that, a new algorithm based on a new model of game play would be required to improve the play. It is interesting that the Alphago team seems to be actively seeking ideas in this area. Maybe that is because they are starting to hit a limit, but maybe it’s just because they are looking into the future.
So Alphago probably can’t improve its play measurably by playing any single player five times, no matter how strong. That would be “overfitting”. The team will be learning from the comments of the pro players and modifying the program to improve it instead.
Interesting tidbit: Alphago said the chances of a human playing move 37 in game 2 were 1 in 10,000. So the policy network doesn’t decide everything.

Sun, 13 Mar 2016 13:13:00 MST
Alphago is a learning machine more than a go machine

The key part of Alphago is a convolutional neural network. These are usually used for recognizing cat pictures and other visual tasks, and progress in the last five years has been incredible.
Alphago went from the level of a novice pro last October to world champion level for this match. It did so by playing itself over and over again.
Chess programs are well understood because they are programmed by humans. Alphago uses an algorithm to pick a winning move in a given go position. But the heart of the program is a learning program that finds that algorithm, not the algorithm itself.
Go programs made steady progress for a decade with improved tree pruning methods, which reduce the total number of positions the program has to evaluate. The cleverest method is Monte Carlo pruning, which simply prunes at random. 
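The random-playout idea is easy to demonstrate on a toy game. Below, a simple subtraction game (take 1-3 tokens; whoever takes the last token wins) stands in for go, and candidate moves are rated purely by the win rate of random playouts, with no position evaluation at all:

```python
import random

def random_playout(n, my_turn):
    """Finish the game with uniformly random moves.
    Returns True if the player of interest takes the last token."""
    while True:
        n -= random.randint(1, min(3, n))
        if n == 0:
            return my_turn      # whoever just moved took the last token
        my_turn = not my_turn

def monte_carlo_move(n, playouts=2000):
    """Rate each legal move by random playouts instead of searching the
    full game tree -- the sampling idea behind Monte Carlo go engines."""
    best_take, best_rate = None, -1.0
    for take in range(1, min(3, n) + 1):
        if n - take == 0:
            return take         # immediate win, no sampling needed
        wins = sum(random_playout(n - take, my_turn=False)
                   for _ in range(playouts))
        rate = wins / playouts
        if rate > best_rate:
            best_take, best_rate = take, rate
    return best_take

random.seed(0)
print(monte_carlo_move(5))      # optimal play is to leave a multiple of 4
```

Even with completely random playouts the statistics point at the right move, which is why the method worked surprisingly well in go long before neural networks arrived.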

Sun, 13 Mar 2016 06:00:00 MST
Informatica V10 emerges

Informatica just announced their Big Data Management solution V10, the latest update to their flagship suite of technology. The key objective is to enable customers to design data architectures that can accommodate both traditional database sources and newer Big Data “lakes” without needing to swim too deeply in the world of MapReduce or Spark.

In particular, the Live Data Map offering is interesting: a tool that builds a metadata catalog as automatically as it can. Crucially, the catalog is updated continuously rather than being a one-off batch exercise that quickly gets out of date, the bane of previous metadata efforts. It analyses not just database system tables but also semantics and usage, so it promises to chart a path through the complexity of today’s data management landscape without the need for whiteboards and data model diagrams.

V10 extends the company’s already fairly comprehensive ability to plug into a wide range of data sources, with over 100 pre-built transformations and over 200 connectors. By providing a layer of interface above the systems management level, a customer can gain a level of insulation from the rapidly changing world of Big Data, with its bewildering menagerie of technologies, some of which disappear from current fashion almost as soon as you have figured out where they fit. Presenting a common interface across traditional and new data sources enables organisations to minimise wasted skills investment.

As well as quite new features such as Live Data Map, there is an array of incremental updates to the established technology elements of the Informatica suite, such as improved collaboration capability within the data quality suite, and the ability of the data integration hub to span both cloud and on-premise data flows. A major emphasis of the latest release is performance improvement, with much faster data import and data cleansing.

With Informatica having recently gone private, it will be comforting for their customers that the company is investing as much as ever in its core technology, as well as adding potentially very useful new elements. The data management landscape is increasingly fragmented and complex these days, so hard pressed data architects need all the help that they can get.

Mon, 16 Nov 2015 15:08:44 MST
Leaving Las Vegas

The Informatica World 2015 event in Las Vegas was held as the company was in the process of being taken off the stock market and into private ownership by private equity firm Permira and a Canadian pension fund. The company was still in its quiet period so was unable to offer any real detail about this. However my perception is that one key reason for the change may be that the company executives can see that there is a growing industry momentum towards cloud computing. This is a challenge to all major vendors with large installed bases, because the subscription pricing model associated with the cloud presents a considerable challenge as to how vendors will actually make money compared to their current on-premise business model. A quick look at the finances of publicly held cloud-only companies suggests that even these specialists have yet to really figure it out, with a sea of red ink in the accounts of most. If Informatica is to embrace this change then it is likely that its profitability will suffer, and private investors may offer a more patient perspective than Wall Street, which is notoriously focused on short-term earnings. It would seem to me that there is unlikely to be any real change of emphasis around MDM from Informatica, given that it seems to be their fastest growing business line.

On the specifics of the conference, there were announcements for the company around its major products, including its recent foray into data security. The most intriguing was the prospect of a yet-to-be-delivered product called “live data map”. The idea is to allow semantic discovery within corporate data, and to let end-users vote on how reliable particular corporate data elements are, rather as consumers vote for movies on IMDB or rate others on eBay. This approach may be particularly useful as companies have to deal with “data lakes”, where data will have little or none of the validation that would (in theory) be applied in current corporate systems. The idea is tantalising, but this was a statement of direction rather than a product ready for market.

The thing that I found most useful was the array of customer presentations, over a hundred in all. BP gave an interesting talk about data quality in the upstream oil industry, which has typically not been a big focus for data quality vendors (there is no name and address validation in the upstream). Data governance was a common theme in several presentations, clearly key to the success of both master data and data quality projects. There was a particularly impressive presentation by GE Aviation about their master data project, which had to deal with very complex aeroplane engine data.

Overall, Informatica’s going private should not have any negative impact on customers, at least unless its executives end up taking their eye off the ball due to the inevitable distractions associated with new ownership.

Sat, 16 May 2015 11:25:47 MST
The Teradata Universe

The Teradata Universe conference in Amsterdam in April 2015 was particularly popular, with a record 1,200 attendees this year. Teradata always scores unusually high in our customer satisfaction surveys, and a recurring theme is its ease of maintenance compared to other databases. At this conference the main announcement continued this theme with the expansion of its QueryGrid, allowing a common administrative platform across a range of technologies. QueryGrid can now manage all three major Hadoop implementations, MapR, Cloudera and HortonWorks, as well as its own Aster and Teradata platforms. In addition the company announced a new appliance, the high-end 2800, as well as a new feature they call the software-defined warehouse. This allows multiple Teradata data warehouses to be managed as one logical warehouse, including allowing security management across multiple instances.

The conference had its usual heavy line-up of customer project implementation stories, such as an interesting one by Volvo, who are doing some innovative work with software in their cars, at least in the prototype stage. For example in one case the car sends signals to any cyclists with a suitably equipped helmet, using a proximity alert. In another example the car can seek out spare parking spaces in a suitably equipped car park. A Volvo now has 150 computers in it, generating a lot of data that has to be managed as well as creating new opportunities. Tesla is perhaps the most extreme example so far of cars becoming software-driven, in their case literally allowing remote software upgrades in the same way as occurs with desktop computers (though hopefully car manufacturers will do a tad more testing than Microsoft in this regard). The most entertaining speech that I saw was by a Swedish academic, Hans Rosling, who advises UNICEF and the WHO and who gave a brilliant talk about the world’s population trends using extremely advanced visualisation aids, an excellent example of how to display big data in a meaningful way.

Thu, 23 Apr 2015 11:24:04 MST
The Private Side of Informatica

Yesterday Informatica announced that it was being bought, not by a software firm but by the private equity company Permira. At $5.3 billion, this values the data integration vendor at over five times the billion-dollar revenue that Informatica saw in 2014, compared to a recent industry average of 4.4 times. This piece of financial engineering will not change the operational strategy for Informatica. Rather it is a reflection of a time when capital is plentiful and private equity firms are feeling bullish about the software sector. Tibco and Dell have followed a similar route. Company managers will not have to worry about quarterly earnings briefings to pesky financial analysts, and will instead be accountable only to their new owners. However, private equity firms seek a return on their investment, usually leveraging plenty of debt into such deals (debt is tax efficient compared to equity), and can be demanding of their acquisitions. From a customer viewpoint there is little to be concerned about. One exit for the investors will be a future trade sale or return to the stock market, so this deal does not in itself change the picture for Informatica in terms of possible acquisition by a bigger software company one day.

Wed, 8 Apr 2015 09:55:28 MST
Snowflake is a New SQL Database Server for the Cloud
One of these new kids on the block is Snowflake Elastic Data Warehouse by Snowflake Computing. It's not available yet; we still have to wait until the first half of 2015. But information is available and beta versions can be downloaded.

Defining and classifying Snowflake with one term is not that easy. Not even with two terms. To start, it's a SQL database server that supports a rich SQL dialect. It's not specifically designed for big data environments (the term doesn't even appear on the website), but for developing large data warehouses. In this respect, it competes with other so-called analytical SQL database servers.

But the most distinguishing factor is undoubtedly that it's architected from the ground up to fully exploit the cloud. This means two things. First, it's not an existing SQL database server that has been ported to the cloud; its internal architecture is designed specifically for the cloud. All the lines of code are new; no existing open source database server was adapted. This makes Snowflake highly scalable and really elastic, which is exactly why organizations turn to the cloud.

Second, it also means that the product can really be used as a service. It only requires a minimal amount of DBA work. So, the term service doesn't only mean that it offers a service-based API, such as REST or JDBC, but that the product has been designed to operate hassle-free. Almost all the tuning and optimization is done automatically.

In case you want to know, no, the name has no relationship with the data modeling concept called snowflake schema. The name snowflake has been selected because many of the founders and developers have a strong relationship with skiing and snow.

Snowflake is a product to keep an eye on. I am looking forward to its general availability. Let's see if there is room for another database server. If it's sufficiently unique, there may well be.

Wed, 29 Oct 2014 02:20:51 MST
Pneuron is a Platform for Distributed Analytics

Initially you would say Pneuron is a jack of all trades, a Swiss army knife, but it isn't.

Pneuron is a platform that offers distributed data and application integration, data preparation, and analytical processing. With its workflow-like environment, a process can be defined to extract data from databases and applications, to perform analytics natively or to invoke different types of analytical applications and data integration tools, and to deliver final results to any number of destinations, or to simply persist the results so that other tools can easily access them.

Pneuron's secret is its ability to design and deploy distributed processing networks, which are based on (p)neurons (hence the product name). Each pneuron represents a task, such as data extraction, data preparation, or data analysis. Pneurons can run across a network of machines, and are, if possible, executed in parallel. It reuses the investment that companies have already made in ERP applications, ETL tools, and existing BI systems. It remains agnostic to and coordinates the use of all those prior investments.

Still, Pneuron remains hard to classify. It is quite unique of its kind. But whatever the category is, Pneuron is worth checking out.

Tue, 28 Oct 2014 10:03:17 MST
QueryGrid is New Data Federation Technology by Teradata

Teradata announced QueryGrid at their Partners event in Nashville, Tennessee. QueryGrid allows developers using the Teradata database engine to transparently access data stored in Hadoop, Oracle, and the Teradata Aster Database. Users won't really notice that data is not stored in Teradata's own database, but in one of the other data stores.

The same applies to developers using the Teradata Aster database. With QueryGrid they can access and manipulate data stored in Hadoop and the Teradata Database.

With QueryGrid, for both Teradata's database servers, access to big data stored in Hadoop becomes even more transparent than with its forerunner SQL-H. QueryGrid allows Teradata and Aster developers to seamlessly work with big data stored in Hadoop without the need to learn the complex Hadoop APIs.

QueryGrid is a data federator, so data from multiple data stores can be joined together. However, it's not a traditional data federator. Most data federators sit between the applications and the data stores being federated; it is the federator itself that the applications access. QueryGrid instead sits between the Teradata or Aster database on one side and Hadoop, Oracle, and the other Teradata and Aster databases on the other side. So applications do not access QueryGrid directly.

QueryGrid supports all the standard features one expects from a data federator. What's special about QueryGrid is that it's deeply integrated with Teradata and Aster. For example, developers using Teradata can specify one of the pre-built analytical functions supported by the Aster database, such as sessionization and connection analytics. The Teradata Database will recognize the use of this special function, knows it's supported by Aster, and automatically passes the processing of the function to Aster. In addition, if the data to be processed is not stored in Aster, but, for example, in Teradata, the relevant data is transported to Aster so that the function can be executed. This means that, due to QueryGrid, functionality of one of the Teradata database servers becomes available for the other.

QueryGrid is definitely an enrichment for organizations that want to develop big data systems by deploying the right data storage technology for the right data.

Tue, 28 Oct 2014 09:59:40 MST