Since the dawn of software architecture, we mostly understood 'Monolithic Architecture' as more or less a synonym for big-ball-of-mud architecture. It referred to code so entangled and coupled that it could not be separated out into components (or layers, or modules, or libraries, or their equivalent). God classes, no information hiding, a change in one place forcing twenty changes elsewhere. That kind of thing. A failure of Modularity.
Recently, the term Monolith has been taken to mean having a Single Deployment Target at runtime. That is quite a different meaning.
If you think that Monolith as described in para 1 above is the same as Monolith in para 2 above, then I suggest that you have confused, not separated, your concerns.
This is easy to understand if you are comfortable with architecture views. The para 1 definition is about the logical and (in 4+1 lingo) development views: the structure and relationships of classes, components, packages, and other kinds of modularity. The para 2 definition is only about the runtime deployment view (in 4+1: physical view).
The point is that in pretty much any operating system, runtime or language devised in the last 30 years, you are at liberty to structure your code and components as carefully and modularly as you like, whilst choosing your runtime deployment scenario independently of that modularity: it's okay for two uncoupled components to run on the same machine. Honestly. *nix does it all the time. Ooh, so does Windows. And .Net and Java and Android and iOS and ....
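To make that concrete, here is a minimal sketch (in Kotlin, with invented names) of two components coupled only through an interface. Whether they share a process or sit on separate machines is a one-line wiring decision, not a redesign:

```kotlin
// Orders and Billing know nothing about each other's deployment;
// they are coupled only by this interface.
interface Billing {
    fun charge(accountId: String, pence: Long)
}

// In-process implementation: Billing lives in the same deployment unit.
class LocalBilling : Billing {
    override fun charge(accountId: String, pence: Long) {
        println("charged ${pence}p to $accountId in-process")
    }
}

// Out-of-process implementation: same interface, but the call crosses the
// network. The real HTTP call is elided; this just stands in for a client.
class RemoteBilling(private val baseUrl: String) : Billing {
    override fun charge(accountId: String, pence: Long) {
        println("POST $baseUrl/charge accountId=$accountId pence=$pence")
    }
}

class Orders(private val billing: Billing) {
    fun placeOrder(accountId: String) = billing.charge(accountId, 999)
}

fun main() {
    // Same modular code either way; only the wiring (the deployment) changes.
    val orders = Orders(LocalBilling())
    // val orders = Orders(RemoteBilling("http://billing.internal"))
    orders.placeOrder("acct-42")
}
```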
A couple of years ago the CTO at intilery.com showed me how their server codebase could be deployed either as a single .war for one webserver, or split into separate .wars for separate machines, just by flicking a switch in the build config.
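I don't know the details of their build, but the idea is easy to sketch. Here is a hypothetical version in Gradle's Kotlin DSL; the module names and the deploySplit property are my inventions, not intilery's actual config:

```kotlin
// build.gradle.kts (root). Assumes settings.gradle.kts contains:
//   include("orders", "billing", "webapp")
//
//   ./gradlew build                -> one combined webapp.war
//   ./gradlew build -PdeploySplit  -> orders.war and billing.war, separately
val split = hasProperty("deploySplit")

subprojects {
    apply(plugin = "java")
    // In split mode, every component packages itself as its own .war.
    if (split && name != "webapp") {
        apply(plugin = "war")
    }
}

if (!split) {
    // In single mode, only the thin :webapp module builds a .war, pulling
    // the components in as ordinary in-process dependencies.
    project(":webapp") {
        apply(plugin = "war")
        dependencies {
            "implementation"(project(":orders"))
            "implementation"(project(":billing"))
        }
    }
}
```

Either way, the components themselves never change; only the packaging does.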
It's not rocket science; it's Separation of Concerns: the codebase is not the runtime.