Experian data breach report and analysis

A good friend of mine, Howard Durdle, is a security expert and CSO. He pointed out this really good Twitter thread breaking down the newly published report on the massive Experian data breach.

https://twitter.com/sawaba/status/1072319618352627714

You don’t need to be a geek or a security expert to understand what is being said here and, more importantly, to read between the lines to the likely root causes. For me, this all points to cultural challenges, where organisational pressures or a lack of appreciation among mid-level decision makers lead to under-investment in non-functional factors such as security, patching and maintenance.

Sadly, Experian aren’t the first to face this challenge, and won’t be the last. With DevSecOps and similar practices, the people building the software will understand the issue. But I think we also need to work on educating business stakeholders about the need to deal with NFRs, and the need to prioritise certain types of issues.

UKOUG Conference 2018

With the start of December comes the UK Oracle User Group conference, or to be more precise, the Independent UKOUG. This year the conference is back in Blackpool – a slightly smaller venue than the ICC in Birmingham, but in many respects that made the event feel more vibrant and busy.

The user group also announced some of the changes it is making going forward, reflecting the changing needs of its members – SIGs are being largely superseded by multi-stream, single-day events (Summits), with the Call for Papers for the first of these here. A wider list of Oracle-related Calls for Papers is available here.

Of course, being a UKOUG volunteer, I have been presenting and co-presenting. The slides from my presentation sessions can be found at:

This was an abridged and updated version of my presentation here.

My second presentation was a review of Oracle Integration Cloud (OIC), in which I presented some customer use cases as part of a wider presentation on OIC by Sid Joshi.

This was followed on the second day by two API-based sessions, the first being a deep dive into custom API policies on the Oracle API Platform.

The final session was another short one, looking at Apiary; it was primarily a demo of what the solution can do.

On top of trying to keep up with my usual workload – a very hectic couple of days.

London Oracle Developer Meet-up – November 18

Another Oracle Developer Meet-up took place in London yesterday. This meet-up focused on Terraform and microservices. Here is a summary of the evening and the slides:

Chris Hollies’ slides can be found here. As the demos aren’t included in the deck, the following videos are alternatives:

Our second session, which I presented, looked at how we can establish transition paths that make it easy to adopt microservices. The presentation material is available here:

My next Packt Project has been announced!

My next Packt project (via O’Reilly) is not a book but a short online training course about good API design, API-first, and some of the tools that can support an API-first methodology. Register for the session here.

It includes a closer look at cloud tools such as Oracle’s excellent Apiary (sorry if this sounds like a sales pitch, but it is a good tool, and the words of the founder of Restlet confirm this), along with SwaggerHub and a few other options.

A good API goes beyond just the payload definition; I’ll walk through the other considerations and explain why these areas are important.

Microservices Patterns Book

Earlier this year, I wrote a short post on Chris Richardson’s book Microservice Patterns (Praise for Microservice Patterns). When I read the book I mind-mapped my notes, which can be seen via the Mindmap Index or accessed directly here. The mind map is no substitute for the book, but it should act as a reasonable aide-memoire.

I would highly recommend getting and reading the book.

UK Oracle User Group Annual Conference

As we rapidly approach the end of the year, we’re still pretty busy. The UK Oracle User Group Annual Conference covers Oracle technology (Tech18), ranging from on-premises database to polyglot development in Oracle and other clouds, passing through hybrid integration, SOA and so on. Alongside this is Apps18, which covers Oracle applications from E-Business Suite and Siebel to the Fusion cloud solutions such as HCM, Financials, Taleo and so on. Finally, the third part covers all things JD Edwards.

Being on the committee for the conference means that I have been heavily involved in developing the conference agenda and choosing sessions. So I can say with great confidence that there is a very diverse range of sessions from highly respected SMEs and presenters, along with new blood, covering subjects from Oracle JET to Robotic Process Automation (RPA), for example.

I hope we’ll see you there.

Value of Technical Capability Models

Technical capability models are not something I have seen used a lot, which is a little unfortunate, as they can provide tremendous insight into an organization’s IT needs.

Typically you want to use the technical capability model in conjunction with a business capability model, and this is where things can get tricky, as developing the business views can take time. I came across this short video, which focuses more on the business aspect but helps explain the ideas behind the models:

Note how the model is largely made up of groups of capabilities that happen in the business. Underlying this kind of diagram you would have a brief explanation of each capability. If you want to go all out on EA modelling, you can then link the capabilities to the associated documented processes and so on.

Independently, the ideal is to then identify the technical capabilities that are likely to be needed. This will produce a similar-looking model. The technical capabilities are probably best drawn from industry best practices and specific business needs, and the model should be completely product agnostic. The real value comes from then mapping the technical capabilities to the business capabilities that use them.

This will help inform a number of decisions and identify areas of focus. Each capability’s mapping should fall into one of the following states, with the associated reasons:

  • Maps To Business Capabilities
    • This is healthy
  • Technology is Being Used but no Business Mapping
    • Gap in the business capability model?
    • Nuance of the business model not understood by IT?
    • Redundant processes being performed?
  • Business Process with no Technology
    • Opportunity for business improvement?
    • Genuinely no value in applying technology, e.g. the business value is that something is handmade?
    • Capability delivered by Shadow IT?
  • Doesn’t Map to Any Business Capabilities
    • Capability isn’t needed and can therefore be jettisoned, OR
    • A potential capability that the business is unaware of, or hasn’t understood what can be offered

With the exception of the first condition, the other scenarios should be examined more closely and the models adjusted accordingly; the sketch below illustrates how the four states might be checked.
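As a rough illustration of these four states, here is a minimal sketch in Java; the capability names, the in-use flag and the classification rules are all hypothetical assumptions for illustration, not a prescribed tool or data format.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch: classify capability mappings into the four
// states described above. All names and the data model are illustrative.
public class CapabilityMapping {

    static final class TechnicalCapability {
        final String name;
        final boolean inUse;                 // is the technology actually deployed?
        final Set<String> supportsBusiness;  // business capabilities it maps to

        TechnicalCapability(String name, boolean inUse, Set<String> supportsBusiness) {
            this.name = name;
            this.inUse = inUse;
            this.supportsBusiness = supportsBusiness;
        }
    }

    public static void main(String[] args) {
        Set<String> businessCapabilities = Set.of(
                "Order Management", "Customer Support", "Artisan Production");

        List<TechnicalCapability> technical = List.of(
                new TechnicalCapability("API Management", true, Set.of("Order Management")),
                new TechnicalCapability("Batch ETL", true, Set.of()),
                new TechnicalCapability("Blockchain Ledger", false, Set.of()));

        // States 1, 2 and 4: driven by each technical capability's mappings
        for (TechnicalCapability tc : technical) {
            if (!tc.supportsBusiness.isEmpty()) {
                System.out.println(tc.name + ": healthy, maps to " + tc.supportsBusiness);
            } else if (tc.inUse) {
                System.out.println(tc.name + ": in use but no business mapping - model gap or redundant process?");
            } else {
                System.out.println(tc.name + ": maps to nothing - jettison, or educate the business?");
            }
        }

        // State 3: business capabilities with no supporting technology
        Set<String> supported = new HashSet<>();
        technical.forEach(tc -> supported.addAll(tc.supportsBusiness));
        for (String bc : businessCapabilities) {
            if (!supported.contains(bc)) {
                System.out.println(bc + ": no technology - improvement opportunity, shadow IT, or genuinely manual?");
            }
        }
    }
}
```

In practice this information would more likely live in an EA tool or even a spreadsheet than in code, but even a simple check like this quickly surfaces the mismatches described above.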

With the capability models linked and the mismatches addressed, the technical capability model can really deliver value by linking the capabilities to the actual technologies being used. Very quickly it is possible to see details such as:

  • Technology weaknesses (i.e. a key business area is not well supported by IT, e.g. the products mapped are end of life, or don’t have the right level of support). Whilst some of these will be ‘no brainers’, it is more than likely a few surprises will show up
  • Technology duplication – sometimes we’ll see multiple products in one area; can the product list be rationalized to maximize license investment? Would it be more cost effective to invest in one high-end product and eliminate lots of smaller niche pieces?
  • Where IT investment will likely improve key capabilities vs investment in niche capabilities
  • How technology change can impact the business; for example, replacing a Content Management System may impact an organization’s online presence, but it may also prove to impact how support services are delivered to customers
  • If the business prioritizes a specific area, how does that map onto IT systems and processes?

Whilst a lot of this will seem pretty obvious, the exercise will uncover unexpected details and, most importantly, provide a relatively simple set of visualizations and cross references that help IT understand the business and explain the impact of IT-related decisions to the business in its own terms.

The following deck provides a presentation on the value of Technical Capability Models:

Defining Boundaries for Logical Gateways on the API Platform in a multi-cloud / multi-region context

The Oracle API Platform takes a different licensing model to many platforms: rather than licensing per CPU, it works on the number of logical gateways and blocks of 25 million successful API calls per month. This means you can have as many actual gateway nodes as you like within a logical group to ensure resilience; essentially, how widely you deploy the gateways is more of a maintenance consideration (i.e. more nodes means more gateways to take through a maintenance process, from the OS through to the gateway itself).

In our book (here) we described the use of logical gateways (groups of gateway nodes operating together) based on the classic development model, which provides a solid foundation and can leverage the gateway-based routing policy very effectively.

[Figure: logical partitions]

But things get a little trickier if you move into the cloud and elect to distribute the back-end services geographically, rather than having a single global instance of the back-end implementation, leveraging technologies such as Content Delivery Networks (CDNs) to cache data at the cloud edge and using their rapid routing capabilities to offset performance factors.

[Figure: Classic global split of geographies]

Some of the typical reasons for geographically distributing solutions are …

  • Low hit rates on data, meaning caching solutions like CDNs are unlikely to yield the performance benefits wanted, and considerable additional work is needed to ‘warm’ the cache,
  • Different regions requiring different back-end implementations – ordering of products in one part of the world may be fulfilled using a partner, while in another it is satisfied directly,
  • Data being subject to residency/sovereignty rules – consider China for example, but Germany and India have special considerations as well.

So our global splits start to look like this:

[Figure: Global split now adding extra divisions for India, China, Russia etc.]

The challenge that arises is that regional routing may be resolved on the Internet side of things through geo-routing, such as the facilities provided by AWS Route 53 and Oracle’s Dyn DNS, to find the nearest gateway. However, geo DNS may not be achievable internally (certainly not for AWS), so routing to the nearest local back end needs to be handled by the gateway. Gateway-based routing can solve this problem using logical gateways – if we logically group gateways regionally then that works. But this then conflicts with the use of gateway-based routing for the separation of development, test, etc.

Routing Options

So, what are the options? Here are a few …

  • Make your logical divisions both by environment and by region – this is fine if you’re processing very high volumes, i.e. hundreds of millions of calls or more, so that the cost of the additional logical gateways is relatively small in the total budget.

[Figure: Taking the geo split and applying the traditional layers as well increases the number of logical gateways]

This problem can be further exacerbated if you consider that many larger organisations are likely to end up with different cloud vendors in the same part of the world, for example AWS and Azure, or Oracle and Google. Continuing the segmentation can then become an expensive challenge, as the following view helps show:

[Figure: logical gateways further multiplied across different cloud vendors]

It is possible to contract things slightly by only having development and test cloud services wherever your core development centre is based. Note that in the previous and next diagrams we’ve removed the region/country-specific gateway drivers.

[Figure: development and test services kept only in the core development region]

  • Don’t segment based on environment, but only on region – but then how do you control changes in the API configuration so they don’t propagate immediately into production?
  • Keep the existing model but clone APIs for each region – certainly the tooling we’ve shared (Managing API Policy Versioning in Oracle API Platform) makes this possible, but it’s pretty inelegant and error prone, as it would be easy to forget to clone a change, and the cloning logic needs to be extended to take into account the bits that must be region-specific.
  • Assuming you have a DNS address for the target, you could effectively rewrite the resolution of the address by changing its meaning in each gateway node’s host file. Inelegant, but effective if you have automated deployment and configuration of your gateway servers.
  • Header-based routing with the region and environment as header attributes. This requires either the client to set the values (not good, as you’re revealing traits of the implementation to your API consumer), or applying custom policies ahead of the header-based routing that insert those attributes based on the gateway’s location etc.
  • Build a new type of gateway-based routing which allows both the environment (dev, test etc.) and the location (region) to inform the routing.

Or – and this is the point of this blog – use gateway-based routing and leverage some intelligent DNS naming, together with the way the API Platform works, plus a little bit of Groovy or a custom Java policy.
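To give a flavour of the idea: if every gateway node knows its own environment and region, and the back ends follow a predictable DNS naming convention, a small piece of logic can derive the correct back-end address at runtime while the API definition stays identical across all logical gateways. The following Java sketch is purely illustrative – the naming convention, property names and example hostnames are assumptions, not part of the API Platform’s actual SDK or configuration:

```java
// Hypothetical sketch of the intelligent DNS naming idea. The naming
// convention, the property names and the example domain are illustrative
// assumptions only, not Oracle API Platform SDK constructs.
public final class RegionalEndpointResolver {

    // e.g. service "orders", env "dev", region "eu-west" resolves to
    //      https://orders.dev.eu-west.api.example.com
    static String resolve(String service, String environment, String region) {
        return String.format("https://%s.%s.%s.api.example.com",
                service, environment, region);
    }

    public static void main(String[] args) {
        // In a real gateway policy these values would come from the gateway
        // node's own configuration, set when the node is provisioned.
        String environment = System.getProperty("gateway.env", "dev");
        String region = System.getProperty("gateway.region", "eu-west");
        System.out.println(resolve("orders", environment, region));
    }
}
```

The same logic could equally be written as a Groovy policy; the key design point is that each gateway node carries its own environment and region identity, so a single API deployment can route correctly everywhere.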

Continue reading

Helidon and the embracing of microservices

Oracle have announced another open-source project called Helidon (Helidon.io), a microservices platform built on top of Netty (which is built around a contemporary async model). If you look at the literature you’ll note two flavours: one called SE, which aligns to the programming characteristics of Node.js – asynchronous; the other called MP, which aligns to the rapidly evolving Java EE MicroProfile and essentially follows a coding style along the lines of J2EE annotations.
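To give a feel for the SE flavour, here is a minimal service in the style of the quickstart examples published at the time (a sketch only – the exact artefact coordinates and APIs may have moved since):

```java
import io.helidon.webserver.Routing;
import io.helidon.webserver.WebServer;

// A minimal Helidon SE service: a single reactive route on top of Netty,
// in the style of the project's early quickstart examples.
public final class HelloServer {

    public static void main(String[] args) {
        Routing routing = Routing.builder()
                .get("/hello", (req, res) -> res.send("Hello from Helidon SE!"))
                .build();

        // start() is asynchronous and returns a CompletionStage<WebServer>.
        WebServer.create(routing)
                .start()
                .thenAccept(ws ->
                        System.out.println("Server running at http://localhost:" + ws.port()));
    }
}
```

The MP flavour expresses the same thing with MicroProfile/JAX-RS style annotations instead of an explicit routing builder.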

Whilst it is perfectly possible to run Helidon-based solutions in either flavour natively, it is clearly geared up for running in Docker + Kubernetes style environments such as Oracle Container Engine for Kubernetes (OKE) or even ACCS. The Helidon website shows how to quickly package your solution into Docker.

In both SE and MP forms the dependencies are hugely stripped back compared to the giants of WebLogic and GlassFish (now EE4J, with the handover of Java EE to the Eclipse Foundation).

It does raise a number of questions: what are the futures of WebLogic and of Oracle’s support for EE4J (some answers here, but nothing Oracle-specific)? WebLogic has never been the fastest to align to the latest Java EE standards (the EE8 standard released last year should become available for WLS sometime this year – see here), but today it is so central to many Oracle products that it isn’t going to disappear; will it just end up slowly ebbing away? That would be a shame – I have heard it said by Oracle insiders that if the removal of one component could be sorted out, WebLogic could easily be configured to have a small, lightweight footprint.

The other interesting thing is what is happening to open source and what it might mean for the future. Up until perhaps three or four years ago, when you thought of open source you would think of software made available by one of a small group of key sponsoring organisations such as Apache, the Linux Foundation and Eclipse, which through their governance frameworks provided levels of equality and process. As a result there were levels of quality and trust, crucially married to strong levels of use and contribution, which meant that – to extrapolate Linus’ Law – bugs could be weeded out quickly and easily. With the advent of services like GitHub, whilst it has become easier to contribute and fulfil Linus’ Law, it has also become very easy to offer a solution that is open source but doesn’t necessarily garner the benefits of Linus’ Law and the other preconceptions we often have about open source, such as that it is, or can be, as good as a commercial solution. After all, throwing code onto GitHub does not guarantee the many eyes and contributors. Nor does it assure the governance, checks and balances that an Apache project, for example, will assure.

It is important to say that I am not against GitHub; in fact I am very much pro, and use GitHub myself to host utilities I make freely available (here). The important point is that we have to be more aware of what open source actually means in each context, and can’t assume a project is likely to have a strong community driving things forward, critically dealing with bugs, and ensuring quality assurance processes are realized.

Helidon joins a number of other offerings in this space, such as Micronaut (also built on Netty). Micronaut takes a different approach to Helidon by adopting a strong inversion of control / dependency injection approach, and in some respects feels a bit like the earlier versions of JBoss Application Server (now known as WildFly), which had a small footprint and made good use of Spring. This is in addition to Spark and Javalin. There is a good illustration of the different servers from Dmitry Kornilov (who also happens to be the Lead Engineer for Helidon) shown below, and the associated article can be seen here.

[Figure: landscape of microservice frameworks, by Dmitry Kornilov]

Unlike Spark, Micronaut and a couple of others, Helidon only supports Java today, rather than JVM-based languages such as Kotlin and Groovy, but it is the only solution that can cover both the MicroProfile and framework domains. It also has a challenge in terms of getting established: Spark has been around since 2015, Javalin appeared in May 2017, and the MicroProfile standard is also driving a lot of forward progress, so getting established will continue to get harder. Liberty, another MicroProfile solution, is based on IBM WebSphere, and Thorntail has links to WildFly (more here). We hope that Helidon will make good headway; a reactive engine in the form of Netty, and the avoidance of IoC or introspection in the core, should mean it will be very quick (particularly during startup), but it needs to show its value differentiation and, importantly, build a strong community contributing to it.

Hopefully, we will get the chance to experiment further with Helidon and write more about it here.

ODC Appreciation Day : ODC Podcasts

So I’m probably bending the spirit of ODC Appreciation Day, as the focus should be on tech. But this year I’d like to flag the podcasts put together by Bob Rhubart. These are at least as diverse in subject as the Oracle technology portfolio: one month the podcast will be about APIs and the next AI, ranging from Women in Technology to NoOps. Even if the subject is not an area of technical interest to you, the podcasts are still worth a listen – you’ll encounter at least one nugget of interesting information.

I have been fortunate enough to participate in the recording of a couple of podcasts. That, combined with having been involved in recording and editing audio and video in previous roles, means I can appreciate the effort that goes into producing a podcast. Gathering a group of different people, often from around the world, into a call isn’t always easy. Then editing the conversation – smoothing out the introductions, removing the pauses that occur as all those non-verbal cues are lost, and shedding any background noise to give a cohesive podcast – takes time and practice.

Bob and Javed might make it look easy when recording Periscope videos at Open World and other events, but that comes from being able to control the environment – something you can’t do when participants are so far apart.