
Phil (aka MP3Monster)'s Blog

~ from Technology to Music

Tag Archives: data

IAM and IDCS do more than support AuthZ

Monday, 1 May 2023 | Posted by mp3monster in development, General, Oracle, Technology

Tags: data, development, OCI, Oracle, SCIM, Security, software

Oracle's IDCS and IAM provide identity management for authentication and authorization across OCI and SaaS such as HCM, SCM, and so on. Most software ecosystems, however, need more than that: if you have personalized custom applications, COTS, or non-Oracle SaaS that need more than just authentication, then some of your people's data needs to be replicated to those systems.

We could solve this with custom integrations, or we can exploit an IETF standard called SCIM (System for Cross-domain Identity Management). The beauty of SCIM is that it brings a level of standardization to the mechanics of sharing personal identity information, addressing the fact that this data goes through a life cycle.

The lifecycle would include:

  • Creation of users.
  • Users move in and out of groups as their roles and responsibilities change.
  • User details change, reflecting life events such as changing names.
  • Users leave, whether because they're no longer employees, they delete their account for the service, or they exercise their right to be forgotten.

This means any SCIM-compliant application can be connected to IDCS or IAM, and it will receive the relevant changes. Not only does this standardize the process of integration, it also helps address compliance needs, such as ensuring data is correct in other applications and that data is not retained any longer than needed (removal in IDCS can trigger removal elsewhere through the SCIM interface). In effect, we have the opportunity to achieve master data management around PII.

SCIM works through standardized RESTful APIs. The payloads have a standardized set of definitions which also allows for customized extensions, much as LDAP can accommodate additional data.
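
To make this concrete, below is a hedged sketch of creating a user over SCIM 2.0 from Python with the requests library. The base URL and token are placeholders rather than IDCS specifics; the schema URNs and the application/scim+json media type come from RFC 7643 and RFC 7644.

```python
import requests

# Placeholder endpoint and credential - substitute your identity service's
# SCIM base URL and a valid OAuth token (both are assumptions, not real values).
SCIM_BASE = "https://idcs-example.identity.oraclecloud.com/admin/v1"
TOKEN = "REPLACE_ME"

# A minimal SCIM 2.0 user (RFC 7643 core schema), plus the standard
# enterprise extension carrying an extra attribute - much as an LDAP
# schema can be extended.
user = {
    "schemas": [
        "urn:ietf:params:scim:schemas:core:2.0:User",
        "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User",
    ],
    "userName": "jsmith",
    "name": {"givenName": "Jane", "familyName": "Smith"},
    "emails": [{"value": "jane.smith@example.com", "primary": True}],
    "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User": {
        "department": "Engineering",
    },
}

resp = requests.post(
    f"{SCIM_BASE}/Users",
    json=user,
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/scim+json",
    },
)
resp.raise_for_status()
print(resp.json()["id"])  # the server-assigned SCIM resource id
```

The rest of the lifecycle described above maps onto the same endpoint family: PATCH against /Users/{id} for changes, DELETE for departures.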

The value of SCIM is such that there are independent service providers who support and aid the configuration and management of SCIM to enable other applications.

Securing such data flows

As this flowing data is by its nature very sensitive, we need to maximize security. Risks that we should consider include:

  • Malicious intent that results in the introduction of a fake SCIM client to egress data.
  • Use of the SCIM interface to ingress poisoned data (use of SCIM means that poisoned data could then propagate to all the identity-connected systems).
  • Identity hijacking – manipulating an identity to gain further access.

There are several things that can be done to help secure the SCIM interfaces. These include the use of an API Gateway to validate details such as the identity of the client and where the request originated. We can also look at the payload and validate it against the SCIM schema using an OCI Function.

We can block operations by preventing the use of certain HTTP verbs and/or URLs, for particular origins or for all of them.
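
As a hedged sketch of that validation idea, the following shows the sort of check an OCI Function (or any gateway-invoked validator) might apply; the allow-lists are illustrative, not the full RFC 7643 schema, which a production validator would carry in full.

```python
# Illustrative allow-lists - a real validator would use the complete
# RFC 7643 attribute definitions, not this abbreviated subset.
ALLOWED_SCHEMAS = {
    "urn:ietf:params:scim:schemas:core:2.0:User",
    "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User",
}
ALLOWED_ATTRS = {"schemas", "userName", "name", "emails", "active"} | ALLOWED_SCHEMAS


def validate_scim_user(payload: dict) -> list:
    """Return a list of problems; an empty list means the payload looks sane."""
    problems = []
    declared = set(payload.get("schemas", []))
    if not declared:
        problems.append("no schemas declared")
    if declared - ALLOWED_SCHEMAS:
        problems.append(f"unexpected schemas: {declared - ALLOWED_SCHEMAS}")
    unexpected = set(payload) - ALLOWED_ATTRS
    if unexpected:
        problems.append(f"unexpected attributes: {unexpected}")
    return problems


# Anything declaring schemas or carrying attributes we don't expect gets flagged.
assert validate_scim_user({"schemas": list(ALLOWED_SCHEMAS), "userName": "jsmith"}) == []
```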

Data Integration with Oracle

Tuesday, 15 September 2020 | Posted by mp3monster in Cloud, General, Oracle, Technology

Tags: Cloud, data, DIPC, integration, marketplace, OCI, ODI, ODI CS, Oracle, Spark

Oracle's data integration product landscape outside of GoldenGate has, since the arrival of Oracle Cloud, been confusing at times. This has meant that finding the right product documentation can be challenging, and knowing which product to use in your own technology roadmap can be harder to formulate. I believe the landscape is starting to settle now. But to understand the position, let's look at the causes of disturbance and the changes that have occurred.

Why the complexity?

This has come, I think, from a couple of key factors. First, the organizational changes triggered by Thomas Kurian's departure, which resulted in the product organization going from essentially three parts (aligning roughly to Infrastructure, Platform, and Applications) to two: Infrastructure and Apps. Add to this that Oracle's cloud has gone through two revolutions. Generation 1, now called Classic, was essentially a recognition that they needed an answer to Microsoft, Google, and AWS quickly (Oracle are now migrating customers off Classic). Then came Generation 2, a more considered strategy that leverages not just the lowest level of the stack (network and compute virtualization), but drives changes all the way through the internals of applications by having them leverage common technologies such as microservices, along with a raft of software services such as monitoring, logging, metering, events, notifications, FaaS, and so on. Essentially, all the services they offer are also integral to their own offerings. The nice thing about Gen2 is that you can see a strong alignment to the CNCF (Cloud Native Computing Foundation) along with other open public standards, formal or de facto, such as MicroProfile with Helidon and the Apache projects. As a result, despite the perceptions of Oracle, modern apps stand a better chance of portability.

Impact on ODI

Oracle's data integration capabilities, cloud or otherwise, have been best known through Oracle Data Integrator, or ODI. The original ODI was the data equivalent of SOA Suite, implementing Extract Load Transform (ELT) rather than ETL, as this meant the Oracle DB was fully leveraged. It was built on WebLogic Server.

Along Came Cloud

When Oracle Cloud came along, there was a natural need for ODI capabilities. Like SOA Suite, the first evolution was to provide ODI Cloud Service, just as SOA Suite had SOA Cloud Service. Both are essentially the same on-premises product with UIs to manage deployment and configuration.

ODI's transformation for the cloud led to ODI CS evolving into DIPC (Data Integration Platform Cloud). Very much an evolution, but with a more web-centered experience for designing the integrations. However, DIPC is no longer available (except possibly to customers already using it).

Whilst DIPC had been evolving, the requirement to continue with on-premises ODI capabilities remained. Whilst we don't know for sure, we can speculate that divergent development was creating overhead as ODI remained an on-prem solution. We then saw the arrival of ODI Marketplace, which provides an easier transition, including taking licensing considerations into account. DIPC has been superseded by ODI Marketplace.

Marketplace

Oracle has developed a Marketplace, just like the other major players, so that 3rd-party vendors can offer their technologies on the Oracle cloud, just as they can with Azure and AWS. But Oracle have also exploited it to offer, in the cloud, their traditional products normally associated with on-premises deployments. As a result, we saw ODI Marketplace. A smart move, as it offers the possibility of carrying on-prem licensing into the cloud along with portability.

So far, the ODI capabilities in their different forms have continued to leverage their WebLogic foundations. But by this time the Gen2 Oracle Cloud and the organizational structures behind it had been well established and were working up the value stack. Those products in the middleware space have been impacted by both the technology strategies and the organization. As a result, APIs, for example, have been aligned to the OCI-native space, while Integration Cloud has moved towards the Apps space. In many respects this reflects a low-code vs code-native model.

OCI ODI

Earlier this year (2020), Oracle launched a brand new ODI product, to use its full name, Oracle Cloud Infrastructure Data Integration. This is an OCI-native (i.e. Gen2) solution, leveraging microservices technologies.

This new product appears to be very much a ground-up build, as it exploits Apache Spark and Functions as a Service (FaaS) as foundational elements. As a ground-up build, it doesn't inherit all the adapters the original ODI can offer. This does mean that, as a solution today, it is very focused on specific needs around supporting data movement between the various Oracle Cloud storage and Database as a Service solutions, rather than general ingestion and extraction processes.

Conclusion

Products are evolving, but the direction of travel appears to be resolving. We are still in a period where there are capability gaps between the Gen2-native solution and the traditional ODI-via-Marketplace solution. As a result, the question becomes less which product to use, and more: if I have to invest in ODI Marketplace now, when and how do I migrate once the native product catches up?

Costs in Multi-Cloud

Wednesday, 28 August 2019 | Posted by mp3monster in General, Oracle, Technology

Tags: AWS, Azure, Cloud, costs, data, ExpressRoute, FastConnect, Oracle, OracleCloud

Over the last couple of years, we have seen growing references to multi-cloud. That is to say, people are recognizing that organisations, particularly larger ones, are ending up with cloud services from many different vendors. This has at least in part come from departments within an organization being able to buy meaningful resources within their local budgets.

There is a competitive benefit to the recent partnership agreement between Microsoft and Oracle, given the market share AWS has in comparison to everyone else. But irrespective of the positioning with AWS, this agreement has arisen because of the adoption of multi-cloud. It also provides a solution to the problem of running highly resilient Oracle database setups using RAC, Data Guard, etc., so these can be made available to Azure without risk to security or the all-important network performance that is essential to DB operation. Likewise, Oracle's SaaS offerings are sector leaders, if not best in class, something Microsoft can't compete with. At the other end, regardless of Oracle's offerings, organisations will often prefer the Microsoft development ecosystem because of its alignment to office tooling and the ease of building solutions quickly.

Multi-cloud, even with agreements like the Microsoft and Oracle one (see here), doesn't mean there won't be higher costs in crossing clouds. Let's see where the costs reside …

  • Data egress (and in some cases ingress as well) from clouds costs. Ingress costs have largely been eliminated, as they can be seen as a barrier to selling services, particularly for big data. Data egress can, however, be an issue. Oracle has made this cost low enough to be almost negligible, but not necessarily others, as the following comparison shows …


  • Establishing the high-performance connections between Azure and Oracle Cloud that the agreement supports (the same tech as cloud-to-ground connections) does incur a cost. In Oracle's case, there is a fee for the connection (not a large cost, but one that exists nonetheless), plus any traffic fees from the provider of the network connection spanning the data centre locations. This is because you're leasing capacity on someone's dedicated fibre or MPLS services. This should prove to be small, as part of the enabler of this offering is that the Oracle and Microsoft cloud DCs are often physically provided by the same provider, or at least the centres are physically pretty close, a result of both companies gravitating to locations close together because of optimal, highly available infrastructure (power, telecommunications), legal and commercial factors, along with the specialist skills needed.

If data egress is the key challenge to costs, what drives the data egress beyond the obvious content for user interfaces? …

  • Obviously, you have the business data flows, some of which will be understood by the business community. But not all, and this is down to the way data from one cloud can be exposed to another – for example, inefficient services with APIs that require frequent polling and don't express the request efficiently, rather than expressing the request using HTTP header attributes and other efficiencies, or even utilizing frameworks such as webhooks so data can be pushed (see the sketch after this list).
  • High-speed data access often drives data replication, with databases in multiple clouds holding mirror-image data in each location, even if the majority of the data is not necessarily needed. This can happen with technologies such as Kafka, where non-compacted topics mean every event can be replicated, even if that event has a short lifetime.
  • One of the hidden costs is the operational task of gathering logs into a combined view so end-to-end insights can be obtained. Detailed logs can actually yield more 'data' by volume than the business flows themselves, because they are semi-structured, intended to be very readable, and at the most granular level are there to help debug and test.
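
To illustrate the polling point in the first bullet, here is a small hedged sketch using HTTP conditional requests from Python; the URL is hypothetical, and it assumes the service returns an ETag header.

```python
import requests

# Hypothetical endpoint - the pattern, not the URL, is the point here.
URL = "https://api.example.com/orders"

# First poll: keep the ETag the server returns alongside the payload.
first = requests.get(URL)
etag = first.headers.get("ETag")

# Later polls: send If-None-Match. A 304 Not Modified response carries no
# body, so unchanged data never crosses the cloud boundary - and never
# bills as egress. Only actual changes get re-downloaded.
headers = {"If-None-Match": etag} if etag else {}
nxt = requests.get(URL, headers=headers)
if nxt.status_code == 304:
    print("unchanged - no payload transferred")
else:
    print(f"changed - {len(nxt.content)} bytes fetched")
```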

In addition to the data flows, you need to consider which other linkages, beyond the Oracle–Azure connection, are involved. Reading the detailed documentation: it is not possible to get your on-premises location connected to one of the clouds (e.g. via Oracle FastConnect) and then assume your traffic can hop to Azure over the bridge using FastConnect and Azure's ExpressRoute. To add performance to your solution parts in both Azure and Oracle Cloud, you still need both FastConnect and ExpressRoute configured to your on-premises location. This, of course, may impact how bulk data for lift-and-shift app use cases such as EBS is handled. For example, if you choose to regularly bulk-transfer data between on-premises and EBS via the app/middleware tier rather than directly via the DB, and that mid-tier is running in Azure, you will need both routes established.

Conclusion

There is no doubt that the Oracle–Azure cloud-to-cloud linkage is a step forward, but 'the devil is in the details', as the saying goes. To optimize the benefits and savings, we'd suggest that:

  • you'll need to think through your use cases – understand data flow and volume (is someone bulk-syncing application data with a data warehouse?),
  • define a cloud data strategy – to lay out principles and approaches and to identify compliance needs. This is particularly helpful for custom solution development, so that the right level of log data is consolidated with the important details, and data retention addresses compliance requirements without ratcheting up unnecessary costs (there is a tendency to hoard data just in case – if this is really wanted, think about how it's stored),
  • based on common business usage models, define a simple forecasting formula – being able to quantify data costs will always make it easier to challenge data-hoarding tendencies (see the sketch after this list),
  • confirm the inter-cloud network vendor charges when working with multi-cloud.
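
To show how simple such a forecasting formula can be, here is a hedged sketch; every figure in it is an illustrative placeholder, so substitute your provider's published egress prices and your own measured volumes.

```python
def monthly_egress_cost(gb_per_day: float, rate_per_gb: float,
                        free_gb_per_month: float = 0.0) -> float:
    """Naive forecast: 30 days of flow, minus any free allowance."""
    billable = max(0.0, gb_per_day * 30 - free_gb_per_month)
    return billable * rate_per_gb


# e.g. 50 GB/day of log shipping at a nominal $0.08/GB with a 100 GB free tier
print(f"${monthly_egress_cost(50, 0.08, 100):,.2f}/month")
```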

Mitigating Risks of Cloud Services

Monday, 8 December 2014 | Posted by mp3monster in General, Technology

Tags: Big Data, data, Data Warehouse, RPO, RTO, SaaS, service, Splunk, XaaS

As previously blogged, there are risks with using cloud that differ from self-hosting solutions. SaaS, PaaS, and all the other XaaS offerings aren't a panacea. Hopefully you won't become the next Sony, as the provider keeps you patched, etc. But what if you're using a SaaS provider that goes bust, or you get into litigation with your provider, and as a result lose access to your data? It could potentially be months whilst the lawyers sort things out – a horrible situation that no one wants to find themselves in. But how to mitigate such risks?

Any half-decent SaaS provider should directly give you the means to get a view of all your data through generic or custom report(s), or should make available the means of providing an export of your data. The latter approach may well come with a cost. If your SaaS solution has a lot of data in place – for example, a multinational's HR solution – you may want to target extracting just the deltas. This means extra donkey work, and someone to ensure it is happening. How frequently should depend upon your business needs, expressed through an agreed Recovery Point Objective (RPO) and your tolerance of potential data loss, as you should assume you'll lose everything after the last snapshot. If you have middleware in front of your SaaS service, you can add a wiretap to reduce the risk here.
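
As an illustration, a watermark-driven delta extract could be as simple as the sketch below; fetch_changed_since is a hypothetical stand-in for whatever delta query your SaaS API actually offers, and how often run_extract is scheduled is exactly what the agreed RPO should dictate.

```python
import json
import pathlib
from datetime import datetime, timezone

STATE = pathlib.Path("last_extract.json")


def load_watermark() -> str:
    """Timestamp of the last successful extract; the epoch if none yet."""
    if STATE.exists():
        return json.loads(STATE.read_text())["since"]
    return "1970-01-01T00:00:00Z"


def save_watermark(ts: str) -> None:
    STATE.write_text(json.dumps({"since": ts}))


def run_extract(fetch_changed_since) -> list:
    """fetch_changed_since(since) stands in for the SaaS API's delta query."""
    since = load_watermark()
    records = fetch_changed_since(since)
    # ... persist the records somewhere under your own control here ...
    save_watermark(datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"))
    return records
```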

Your net position, in the event of a loss or a prolonged service outage (remember, even Amazon has had multi-day failures, and not all SaaS solutions follow the good cloud practice of being able to fail over to secondary centres), is that you have your data and can at least cobble something together to bridge the gap. Unless your SaaS vendor is offering you something very unique, they're probably going to have competitors that are more than likely to be glad to help you import the data into their solution.

All this for a case of paranoia? Well, actually, you can harvest a raft of other benefits from taking full data extracts – for example, reconciliation with a view to managing data quality; statistics from Experian show the value of resolving discrepancies. That is to say, you might find data errors between systems as a result of things like edge scenarios, such as handling errors in the integration layer. To illustrate the point, let's assume that your web sales channel is via a SaaS provider and you're receiving the sales into your on-premises ERP for fulfilment and accounting. By taking all transactions in the SaaS solution each week, you can identify discrepancies and reconcile any issues between the sales solution and your finance and fulfilment capabilities, ensuring what you have sold is what you have accounted for. If we're talking about solutions that impact your financial accounting, then for US declarations at least it may be necessary to perform such reconciliation in support of Sarbanes-Oxley (SOX) requirements.
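
A minimal sketch of that weekly check, assuming both extracts can be reduced to sets of transaction ids (the ids shown are invented):

```python
# Compare transaction ids exported from the SaaS sales channel with those
# recorded in the ERP; real extracts would be loaded from files or APIs.
def reconcile(saas_ids: set, erp_ids: set) -> None:
    for txn in sorted(saas_ids - erp_ids):
        print(f"ALERT: sale {txn} is absent from the ERP")  # sold, never accounted
    for txn in sorted(erp_ids - saas_ids):
        print(f"ALERT: ERP entry {txn} has no matching sale")  # accounted, never sold


reconcile({"T100", "T101", "T102"}, {"T100", "T102", "T999"})
```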

Add to this that a richer data set can be fed into your Big Data or Data Warehouse environments, potentially allowing you to gain further insights into your activities.

When you are running a hybrid of on-premises and cloud solutions, or even just cloud but with a mix of vendors, don't just think about your application data; consider whether audit and web traffic information can be retrieved from the vendor. There may be value in feeding that data into a solution such as Splunk, which may then find a pattern of misuse or attack that would not show up in the monitoring data from your on-premises solutions alone.

The final point I should make is: don't assume your service provider will let you at the data as described – look at your contracts before any payment or act of agreement. Ideally such checks should be part of your service due diligence activities (along with escrow, etc.). There are SaaS providers who will consider the data their property, not yours, even when the data might be about your employees.

Hierarchy of Data Assurance

Sunday, 19 October 2014 | Posted by mp3monster in General, Oracle, Technology

Tags: AIA, data, integrity, maslow, Oracle, reconciliation, SOA

I was discussing the challenges of ensuring that data is protected and proven to have integrity, and that as the data moves through systems there isn't data loss. This sort of thing starts at the simplest level with data validation; at the most advanced, with the greatest investment, you have some end-to-end reconciliation framework.

Obviously this thinking doesn't work in every environment. For example, with complex event processing (CEP) you're going to just accept the data coming through; if it's incomplete or data has been lost along the way, you accept it as it is – these conditions will create outliers which will get smoothed out in trends. It is possible you will have created the gaps yourself by dropping data that was slow to arrive. But for the majority of run-of-the-mill solutions, such as accounting, HR, and so on, the thinking stands up.

To communicate the idea effectively to senior management – the risks of focusing purely on functional delivery, and whether there is maturity in the delivery capability – we hit on the idea of using a variant of Maslow's hierarchy of needs, something I think everyone gets. You can see our representation here:

[Figure: Hierarchy of Data Assurance]

The interesting thing is that you could look at the triangle and suggest that, typically, the more pressed a project is on factors such as volume of functionality, cost, and/or time, the more likely the project is to remain at the bottom of the triangle. But the width of the triangle at the level of the capabilities realised also reflects the operational costs. So if you're at the bottom of the triangle, you're likely to incur more costs dealing with data issues, as the means to detect and then resolve them are a lot more restricted.

Frameworks such as those in Oracle's SOA Suite and AIA should make it easier to move at least part-way up the triangle, although full end-to-end reconciliation is more likely to demand more data-centric tools, as you'll probably want to perform it through batch-like assessments.

Introducing Canonical Models into a Web Service’d Environment

Saturday, 7 June 2014 | Posted by mp3monster in General, Technology

Tags: Canonical, data, REST, slides, SOAP, Web Service, Web Services, WSDL

I've produced my own slide deck on how to adopt canonical data models into an environment that already uses Web Services, and I have used SlideShare for the first time to make the deck available. I hope you find it interesting.

Enterprise Security – A Data Centric Approach – A brief review

Wednesday, 5 February 2014 | Posted by mp3monster in Book Reviews, Books, General, Packt, Technology

Tags: Aaron Woody, book, data, datasec, enterprise, Packt, review, Security

I have previously blogged a series of largely chapter-by-chapter reviews of Aaron Woody's book Enterprise Security – A Data Centric Approach. This post provides a brief summarised view, pulling together my thoughts on the book overall.

As an Enterprise Architect, I took an interest in this book as an opportunity to validate my understanding of security and to ensure that, in the design and guidance work I do, I am providing good insights and direction, so that application architects and developers are both following good security practices and also making the helpful information available to other teams such as IT Security, operational support, and so on.

The book is overall very well written and extremely accessible, even to those not versed in the dark arts of IT Security. Anyone in my position, or fulfilling a role as an application designer or product development manager, would really benefit from this book. Even those on the business end of IT would probably benefit, in terms of garnering an insight into what IT Security should be seeking to achieve and why they often appear to make lives more difficult (i.e. putting restrictions in place, perhaps blocking your favourite websites).

So why so helpful? Well, Aaron has explained the issues and challenges that need to be confronted in terms of security from the perspective of the organisation's key assets – mainly its data (certainly the asset likely to cause the most visible problems if compromised). Not only that, the book presents a framework to help qualify and quantify the risks and, as a result, devise a justifiable approach to securing the data and, most importantly, make defensible cases for budget spend.

I have to admit that the 1st chapter, which introduces the initial step in the strategy, was a bit of a struggle, as it seemed to adopt and try to define a view of the world that felt a little too simplistic. The truth is that this is the 1st step in a journey, and in hindsight an important one – so stick with it.

Once the basic framework is in place, we start looking at tooling strategies and technologies to facilitate security. The book addresses categories of product rather than specific solutions, so it isn't going to date too quickly. The examination of solutions includes the pros and cons of their use (e.g. wifi lock-down), which is very helpful.

Finally, to really help, the book comes with a rich set of appendices providing a raft of references to additional material that will help people translate principles into practice.

To conclude: a little effort may be needed to get you started, but this is ultimately a well-written, informative, information-rich book on security.

Previous blog entries:

  • Chapter 1
  • Chapter 2
  • Chapter 3
  • Chapter 4
  • Chapter 5 & 6
  • Chapter 7 & 8
  • Final Chapter

There is also a supporting website for the book at http://www.datacentricsec.com/

Enterprise Security – A Data Centric Approach – the final chapter

Wednesday, 5 February 2014 | Posted by mp3monster in Book Reviews, Books, General, Packt, Technology

Tags: Aaron Woody, book, data, enterprise, Packt, review, Security

So, I have reached the final chapter of the book, which covers the handling of security events and security incidents (the differentiation between the two being the consequences of the event – a piece of malware being detected on a desktop can be an event, as the consequences are relatively trivial compared to the defacing of an e-tailer's website).

I have to admit I glossed through this chapter, as my role within an organisation doesn't demand the operational management of issues. That said, the book provides some clear guidance on how to develop a process to support the handling of a security issue – important, as you don't want to be figuring these things out when something happens; you want to get on and focus on execution. As with previous chapters, this is well written and doesn't demand knowledge of the security dark arts to get to grips with.

The book finishes with a series of appendices, some providing illustrative information for chapters in the book, plus others full of really useful additional reference sites covering a spectrum from security education resources to security tools.

This series of blogs on the book will be wrapped up with a short review of the whole book. But I would like to congratulate Aaron Woody on a fine book, rich with helpful additional information.

Previous blog entries:

  • Chapter 1
  • Chapter 2
  • Chapter 3
  • Chapter 4
  • Chapter 5 & 6
  • Chapter 7 & 8

There is also a supporting website for the book at http://www.datacentricsec.com/

Enterprise Security – A Data Centric Approach – Chapter 4

Wednesday, 1 January 2014 | Posted by mp3monster in Books, General, Technology

Tags: Aaron Woody, book, data, Data-Centric Approach, enterprise, Enterprise Security, network security, Security

Continuing into chapter 4 of Enterprise Security: A Data-Centric Approach to Securing the Enterprise by Aaron Woody, we start to look at some technical aspects of security and technology, covering things like the capabilities of the new generation of firewalls, DNS security, and so on. The information is presented in a very readable manner.

As an Enterprise Technology Architect, and having security specialist friends, I thought I was reasonably well informed in this aspect of IT, but the book still taught me things. Interestingly, and perhaps not intentionally, the chapter left me with a number of things that could be incorporated into development governance that would make the work of network security a lot easier.

The chapter continues with lots of really helpful references; many, maybe all, are incorporated into a series of appendices that are full of helpful information references and links. If these were made available on the book's website (see below), it would likely become a must-go-to site for security resources.

It does leave me asking one question: how does this all fit in when using a PaaS solution such as those offered by the likes of Amazon and Rackspace?

Previous blog entries:

  • Chapter 1
  • Chapter 2
  • Chapter 3

The book has been published by Packt (who at the time of writing are running a promotion – more here)

There is also a supporting website for the book at http://www.datacentricsec.com/

Enterprise Security: A Data-Centric Approach to Securing the Enterprise – book review

Tuesday, 2 July 2013 | Posted by mp3monster in Book Reviews, Books, Packt, Technology

Tags: book, data, enterprise, Packt, review, Security

I have started to review another book, this time Enterprise Security: A Data-Centric Approach to Securing the Enterprise by Aaron Woody. Based on the interest in my review of Getting Started with Oracle Event Processing 11g, I thought I'd follow a similar approach of reviewing one or two chapters at a time, although because of other constraints possibly not as quickly as last time.

As an enterprise architect, I have worked within some more sensitive environments, where security typically means locking the world down, particularly at the perimeter. But this becomes increasingly less practical as we become ever more connected. Not to mention that the tighter the old approaches are applied, the more the business will bypass IT (e.g. go and acquire SaaS solutions without IT support), the net result being an own goal, undermining the very thing you're trying to achieve. So the killer question is: can the book show another way that works, matching challenges ranging from SaaS (software as a service) to BYOD (bring your own device – i.e. connecting your own smartphone to systems and working with them on the move, etc.), against the backdrop of increasing data legislation and the commercial fallout (customer loss, etc.) that results from security breaches becoming public knowledge?

Chapter 1 is very much a good scene-setter, providing some of the background as to how security approaches have evolved over the last 30 or so years. It sets out some clear perspectives on the challenges of applying security, such as:

  • Making cases for investment
  • Applying security as an overlay on a solution, rather than as an integral part of a design, and the impacts this can cause
  • The challenges of the stakeholders involved
  • The mentality of just locking down the perimeter (when statistics regularly show that increasing data leakages are the result of accident or malicious actions by those inside the organisation)

The book also challenges the mentality that security is the network – a grave mistake, as security impacts processes and roles just as much as it does the software and physical infrastructures.

This sets up the journey of defining an alternate approach, starting with defining the boundaries that should be considered.
